Innovative Cognitive Computing

Innovative Cognitive Computing (IC2) brings together researchers in cognitive computing, artificial intelligence, computational intelligence, neuroinformatics, and psychology.

Simulating Human Thoughts Through Cognitive AI

Our research delivers computational models that learn, reason, and perceive like the human mind. From healthcare diagnostics to industry automation, IC2 drives tangible outcomes that advance both technology and human understanding.

5+ Research Domains

What Drives Our Research?

An interdisciplinary approach that builds AI systems capable of understanding context and human intent.

Deep Learning

Neural architectures for perception, generation, and reasoning at scale.

Natural Language Processing

Understanding, generating, and reasoning over human language at every level.

Information Retrieval

Search, recommendation, and knowledge extraction from large-scale corpora.

Human-Computer Interaction

Behavior analysis, user modeling, and interfaces that adapt to people.

Data Mining

Pattern discovery, anomaly detection, and predictive modeling across domains.

Neuroinformatics

Computational models of cognition bridging neuroscience and AI.


Building Tomorrow's Leaders

IC2 cultivates the next generation of researchers through collaboration between distinguished faculty and emerging scholars. We provide real opportunities for students to contribute to innovative, publishable work.

10+ Researchers
50+ Publications
20+ Student Alumni

Latest updates from IC2

Research insights and articles

A Framework for Synthetic Audio Conversations Generation using Large Language Models

Our Research
In this paper, we introduce ConversaSynth, a framework designed to generate synthetic conversation audio using large language models (LLMs) with multiple persona settings. The framework first creates diverse and coherent text-based dialogues across various topics, which are then converted into audio using text-to-speech (TTS) systems. Our experiments demonstrate that ConversaSynth
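The two-stage pipeline the abstract describes — persona-conditioned dialogue generation by an LLM, then per-speaker text-to-speech — can be sketched as follows. This is a minimal illustration only: `Persona`, `generate_dialogue`, and `synthesize` are hypothetical stand-ins, not the actual ConversaSynth API, and the LLM and TTS calls are replaced with stubs.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """One speaker in the synthetic conversation (illustrative)."""
    name: str
    style: str

def generate_dialogue(topic, personas, turns=4):
    # Stage 1 stub: in the real framework, each turn would be produced
    # by a large language model conditioned on the speaker's persona,
    # keeping the dialogue coherent across turns.
    dialogue = []
    for i in range(turns):
        speaker = personas[i % len(personas)]  # speakers alternate
        dialogue.append((speaker.name, f"[{speaker.style}] turn {i + 1} on {topic}"))
    return dialogue

def synthesize(dialogue):
    # Stage 2 stub: in the real framework, a TTS system would render each
    # utterance with a distinct voice per speaker. Here we just return one
    # placeholder "audio clip" (a byte string) per utterance.
    return [(speaker, text.encode("utf-8")) for speaker, text in dialogue]

personas = [Persona("A", "curious"), Persona("B", "skeptical")]
dialogue = generate_dialogue("renewable energy", personas)
clips = synthesize(dialogue)  # one (speaker, audio) pair per turn
```

Keeping the text and audio stages separate, as above, is what lets the same generated dialogues be re-rendered with different TTS voices or systems.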