H.A.R.M.O.N.I. Lab
Director: KMA Solaiman
Designing AI That Endures
At the H.A.R.M.O.N.I. Lab, we design AI systems that think beyond narrow objectives — systems that adapt, align, and reason in complex, real-world environments. From smart grids to policy documents, from missing persons to triage prediction, our research spans multiple modalities and domains, bridging real-world applications with foundational AI challenges.
Our work is organized into three core clusters, each linked to active 📌 projects and real-world applications:
- Robust AI & Adaptation to Novelty
Detecting and reasoning about unseen or shifting inputs to ensure safe operation in dynamic settings.
📌 Structure-Aware Novelty in Open-World AI • Novelty-Aware Smart Electric Grids • Unsupervised Clustering and Structure Discovery
- Multimodal Information Retrieval and Reasoning
Aligning content across text, image, and graph modalities to enable open-world understanding.
📌 Multimodal Information Retrieval and Alignment • Motherboard Defect Detection • Adaptive Prediction in Triage & Finance
- Human-Centered Decision Systems
Supporting real-world users with interpretable, context-aware AI for safety, health, and preferences. In unstructured text, we also investigate framing bias, stance perception, and subjective disagreement in human and LLM annotations, including media bias, political framing, and annotation typologies grounded in social and linguistic theory.
📌 Adaptive Prediction in Triage & Finance • Human-Centered Reasoning • BiasLab: Explainable Political Bias Detection • Human Attribute Recognition from Unstructured Text
Applications We Explore
- Electricity theft, anomaly detection, and triage in smart energy systems
- Legal and policy document understanding
- Intelligent triage and decision support in healthcare
- Missing person retrieval and civic response systems
- Stock forecasting using news, sentiment, and historical signals
- Fault and defect detection in hardware (e.g., motherboards)
- Linguistic bias detection in political news articles
Lab Projects
Novelty-Aware Smart Electric Grids
- NovASGrid: Novelty-Aware Smart-Grid Resilience (2025)
Multimodal Information Retrieval and Alignment
- Multimodal Information Retrieval for Open World with Edit Distance Weak Supervision (Submitted to ICDE 2024)
- Applying Machine Learning and Data Fusion to the "Missing Person" Problem (IEEE Computer 2022)
- Open-Learning Framework for Multi-modal Information Retrieval with Weakly Supervised Joint Embedding (AAAI Spring Symposium 2022)
- Surveillance Video Querying With A Human-in-the-Loop (HILDA@SIGMOD 2020)
- SKOD: A Framework for Situational Knowledge on Demand (POLY@VLDB 2019)
Structure-Aware Novelty in Open-World AI
- Domain Complexity Estimation for Distributed AI Systems in Open-World Perception Domain (Submitted to IEEE CogMI 2024)
- Measurement of Novelty Difficulty in Monopoly (AAAI Spring Symposium 2022)
- Dataset Augmentation with Generated Novelties (TransAI 2021)
Motherboard Defect Detection
- BoardVision: Deployment-ready and Robust Motherboard Defect Detection with Ensemble (Submitted to WACV 2026)
Unsupervised Clustering and Structure Discovery
- Minimal Parameter Clustering of Complex Shape Dataset with High Dimensional Dataset Compatibility (MPCACS) (BUET Thesis Poster Presentation 2014)
Human-Centered Reasoning & Recommendation Systems
Adaptive Prediction in Triage & Financial Markets
- TRIAGE-M: Triage from MIMIC — Emergency Triage Benchmark bridging Hospital-Rich and MCI-Like Field Simulation (Submitted to GenAI4Health at NeurIPS 2025)
BiasLab: Explainable Political Bias Detection
- BiasLab: Toward Explainable Political Bias Detection with Dual-Axis Annotations and Rationale Indicators (Presented at ICML MoFA 2025)
Course-Based Research Topics (CMSC 678 / CMSC 471 / CMSC 478)
Students in my courses occasionally pursue independent or group research projects that align with the H.A.R.M.O.N.I. Lab’s core research clusters.
These topics are examples of open project directions that students may select or adapt for future semesters.
Each will be scoped appropriately for course credit and designed around public data and independent code.
🔹 Causal Summarization of Safety Narratives
Cluster: Multimodal Interpretability & Reasoning (H.A.R.M.O.N.I. Cluster 3)
Explore how large language models (LLMs) can summarize and extract causal or contributing factors from publicly available safety reports or incident narratives.
Focus: prompting design, summarization quality, and interpretable causal reasoning.
(Uses public datasets only; conducted under H.A.R.M.O.N.I. Lab supervision for educational and exploratory research.)
🔹 Novelty Detection in Smart-Infrastructure Sensor Data
Cluster: Resilient & Novelty-Aware AI Systems (H.A.R.M.O.N.I. Cluster 1)
Investigate methods for identifying unusual events or rare patterns in open traffic-sensor datasets using independently developed code.
Focus: novelty detection, temporal patterning, and visualization of rare behaviors.
(Independent codebase and open data only.)
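As a starting point, the kind of method students might prototype here can be as simple as a rolling z-score detector: flag any reading that deviates sharply from its recent history. This is a minimal illustrative sketch on a synthetic series (the data, window size, and threshold are placeholder assumptions, not part of any lab dataset):

```python
# Minimal novelty-detection sketch: flag points whose deviation from the
# trailing window's mean exceeds a z-score threshold. Pure stdlib; the
# "traffic counts" below are synthetic with one injected anomaly.
from statistics import mean, stdev

def novelty_flags(series, window=10, threshold=3.0):
    """Return indices whose value lies more than `threshold` sample
    standard deviations from the mean of the preceding `window` points."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Synthetic hourly counts cycling 20..24, with a spike injected at index 25.
counts = [20.0 + (i % 5) for i in range(40)]
counts[25] = 95.0
print(novelty_flags(counts))  # → [25]
```

A course project would replace the synthetic series with an open traffic-sensor dataset and compare this baseline against richer temporal models.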
🔹 Survey + Evaluation of Sensor–Text Embedding Spaces
Cluster: Multimodal Representation Alignment (H.A.R.M.O.N.I. Cluster 2)
Examine how numerical sensor data (e.g., EEG or motion signals) and natural-language descriptions can be represented and compared in a shared latent space.
Focus: cross-modal embedding geometry, metric robustness, and visualization of alignment patterns.
(Uses public time-series datasets; conducted under H.A.R.M.O.N.I. Lab supervision for educational and exploratory research.)
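At its core, comparison in a shared latent space reduces to a similarity metric over embedding vectors. The sketch below shows cosine-similarity retrieval with hypothetical 4-d embeddings (the vectors and captions are invented placeholders, not output of any real encoder):

```python
# Sketch of cross-modal retrieval: score text candidates against a sensor
# embedding by cosine similarity in a shared space. All vectors here are
# illustrative placeholders, not real model output.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: one sensor window, two candidate descriptions.
sensor_vec = [0.9, 0.1, 0.0, 0.4]
captions = {
    "subject is walking": [0.8, 0.2, 0.1, 0.5],
    "subject is at rest": [0.1, 0.9, 0.7, 0.0],
}
best = max(captions, key=lambda c: cosine(sensor_vec, captions[c]))
print(best)  # → subject is walking
```

A project in this direction would obtain the embeddings from actual sensor and text encoders, then study how the geometry and the choice of metric affect retrieval robustness.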
✳️ Attribution & Ethics
All course projects must use publicly available datasets and independently written code.
Outstanding work may later be extended for academic publication under H.A.R.M.O.N.I. Lab supervision with appropriate student credit or acknowledgement.
Any resulting models, embeddings, or visualizations may not be reused externally without permission from the instructor.
Join the Lab
We welcome students who are curious, motivated, and eager to build practical AI systems that tackle real-world complexity. Your time is valuable, and we strive to ensure that your contributions are recognized — through course credit, funding, or formal research roles.
Who We’re Looking For
- UMBC undergraduates, especially those interested in AI/ML, data systems, HCI, or applied computing.
- Graduate students pursuing thesis or project work.
- Independent contributors working on domain-driven or experimental projects.
- Students enrolled in or planning to take CMSC 471, 478, or 678 with Prof. Solaiman.
For UMBC Master’s and Undergraduate Students
While I currently do not have dedicated funding for master’s or undergraduate students, I am open to supervising independent projects and collaborative research for credit.
I strongly prefer to get to know students through coursework before working together. If you’re interested in joining the lab, please consider enrolling in one of my classes.
While I would like to work with many exceptional students, I may not always be able to accommodate everyone. However, there are several useful resources and programs at UMBC:
- Research for Credit
CMSC 499/699: Independent Study • CMSC 698: Project in Computer Science • CMSC 799: Master's Thesis
- Funded Undergraduate Opportunities (UMBC-specific)
- Funded Graduate Fellowships (external)
- Cross-Faculty Collaboration
If you’re funded through another PI, we welcome joint mentorship and interdisciplinary project involvement.
Courtesy: list based on Dr. Tejas Gokhale's FAQ.
Collaborations
We collaborate across campus and beyond:
- UMBC Center for AI
- University of Maryland Medical Center (UMMC)
- UMB School of Pharmacy (SOP)
- Purdue University
- Past partnerships: MIT, NGC, DARPA, USC-ISI
Contact
KMA Solaiman
Assistant Teaching Professor, CSEE, UMBC
Director, H.A.R.M.O.N.I. Lab
📧 ksolaima@umbc.edu
🌐 Lab Website
If you’re driven by curiosity and care about the real-world impact of AI, we’d love to hear from you.