An agentic reasoning approach for enhancing robustness and reducing hallucinations in vision-language models.
Jul 1, 2025
A sparse adversarial attack for generating interpretable adversarial examples.
Jan 1, 2025
A neurosymbolic AI framework for robust network intrusion detection with uncertainty quantification.
Jan 1, 2025
A comprehensive survey of neurosymbolic AI approaches for network intrusion detection systems.
Jan 1, 2025
A comparison of fine-tuning and distillation for LLM compression in edge AI deployment.
Jan 1, 2025

URSABench is an open-source benchmarking suite for assessing Bayesian deep learning models and inference methods, focusing on uncertainty quantification, robustness, scalability, and accuracy in classification tasks on both server and edge GPUs.
Apr 1, 2022