
Hydra: An Agentic Reasoning Approach for Enhancing Adversarial Robustness and Mitigating Hallucinations in Vision-Language Models

An agentic reasoning approach for enhancing robustness and reducing hallucinations in vision-language models.

Jul 1, 2025

Towards Interpretable Adversarial Examples via Sparse Adversarial Attack

A sparse adversarial attack for generating interpretable adversarial examples.

Jan 1, 2025

Neurosymbolic Artificial Intelligence for Robust Network Intrusion Detection: From Scratch to Transfer Learning

A neurosymbolic AI framework for robust network intrusion detection with uncertainty quantification.

Jan 1, 2025

Neurosymbolic AI for network intrusion detection systems: A survey

A comprehensive survey of neurosymbolic AI approaches for network intrusion detection systems.

Jan 1, 2025

Constrained Edge AI Deployment: Fine-Tuning vs Distillation for LLM Compression

A comparison of fine-tuning and distillation for LLM compression in constrained edge AI deployment.

Jan 1, 2025

Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo

Aug 1, 2023

Reducing classifier overconfidence against adversaries through graph algorithms

Jul 1, 2023

Enhancing Resilience in Mobile Edge Computing Under Processing Uncertainty

Mar 1, 2023

Maximizing Energy Efficiency With Channel Uncertainty Under Mutual Interference

Oct 1, 2022

URSABench: A System for Comprehensive Benchmarking of Bayesian Deep Neural Network Models and Inference Methods

URSABench is an open-source benchmarking suite for assessing Bayesian deep learning models and inference methods, with a focus on uncertainty quantification, robustness, scalability, and accuracy in classification tasks on both server and edge GPUs.

Apr 1, 2022