Project Overview
Ei Mix is an AI-powered entertainment assistant that converts natural language into personalized music and audio experiences. Instead of manually searching for songs or playlists, users can describe the mood, activity, or experience they want, and the system automatically generates a tailored listening session.
Problem Statement
Current music platforms require users to search, scroll, and manually create playlists, even when they already know how they want to feel.
This creates friction between intention and experience.
People do not think in playlists—they think in moods, energy, and moments.
Proposed Solution
Ei Mix solves this problem by allowing users to interact with music and audio through simple voice or text prompts.
For example, a user can say:
“Play a calm late-night Afro mix,”
and the system generates a personalized playlist instantly.
This creates a more intuitive, efficient, and hands-free experience.
How It Works
Ei Mix processes user input and breaks it into key elements such as mood, genre, and activity.
The system then uses a recommendation engine to generate a sequence of content that matches those inputs.
The flow of the system is:
User Input → AI Understanding → Recommendation Engine → Playback Output
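The flow above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Ei Mix implementation: the keyword lists, track catalog, and function names are all hypothetical placeholders standing in for the AI understanding and recommendation steps.

```python
# Minimal sketch of the Ei Mix flow: prompt -> parsed intent -> ranked tracks.
# The keyword sets and catalog below are illustrative placeholders only.

MOOD_WORDS = {"calm", "energetic", "happy", "sad"}
GENRE_WORDS = {"afro", "jazz", "lofi", "pop"}
ACTIVITY_WORDS = {"study", "workout", "late-night", "driving"}

CATALOG = [
    {"title": "Night Drift", "mood": "calm", "genre": "afro", "activity": "late-night"},
    {"title": "Sunrise Run", "mood": "energetic", "genre": "pop", "activity": "workout"},
    {"title": "Low Tide", "mood": "calm", "genre": "lofi", "activity": "study"},
]

def parse_prompt(prompt: str) -> dict:
    """AI Understanding step: extract mood, genre, and activity keywords."""
    tokens = set(prompt.lower().replace(",", " ").split())
    return {
        "mood": MOOD_WORDS & tokens,
        "genre": GENRE_WORDS & tokens,
        "activity": ACTIVITY_WORDS & tokens,
    }

def recommend(intent: dict, catalog: list) -> list:
    """Recommendation Engine step: rank tracks by how many intent fields match."""
    def score(track):
        return sum(track[field] in wanted for field, wanted in intent.items() if wanted)
    ranked = sorted(catalog, key=score, reverse=True)
    return [t["title"] for t in ranked if score(t) > 0]

playlist = recommend(parse_prompt("Play a calm late-night Afro mix"), CATALOG)
print(playlist)  # -> ['Night Drift', 'Low Tide']
```

A production system would replace the keyword matching with a language model and the catalog scan with a real recommendation engine, but the pipeline shape stays the same.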
What I Built
For this project, I designed the full system architecture and interaction model for Ei Mix.
This includes:
- A prompt-based interaction system
- A recommendation logic system
- A multi-mode entertainment interface
- High-fidelity UI mockups
- A working demo simulation (prompt → output flow)
The prototype demonstrates how users can move from intention to experience in a single step.
Prototype / Demonstration
The current prototype includes the demo simulation described above: a prompt → output flow built on the high-fidelity UI mockups.
This demonstrates how the platform would function in a real-world environment.
Project Completion Assessment
The project is currently in a refined prototype stage. The system design, UI mockups, and demonstration flow have been completed.
The remaining work focuses on finalizing the presentation, improving delivery, and recording the SIP Showcase video.
Progress Updates
Jan 22, 2026 – Progress Updates (SIP408)
Project focus has been refined to prioritize demonstrating the core innovation behind Ei Mix: voice-driven music control and intelligent mixing logic. The initial system scope has been narrowed to ensure the primary interaction flow can be proven functional before SME review. Current efforts focus on defining the minimum functionality required to validate the innovation before expanding its features.
The primary proof of innovation for SME review will be a demonstrable voice-driven interaction flow showing how a user prompt produces an intelligent music response.
- Phase 1 – Core Prototype: functional UI with predefined response mapping
- Phase 2 – Voice Integration: NLP classification of emotional intent
- Phase 3 – Recommendation Logic: dynamic content selection algorithm
- Phase 4 – User Testing: validation of response relevance and usability
- Phase 5 – Deployment Architecture: cloud integration and scalability design
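Phase 1's "predefined response mapping" can be sketched as a simple lookup table before any NLP is involved. The prompts and playlists below are hypothetical examples, not content from the actual prototype.

```python
# Hypothetical Phase 1 sketch: predefined response mapping, no ML involved.
# Recognized prompts map directly to canned playlists.

RESPONSE_MAP = {
    "calm late-night afro mix": ["Night Drift", "Slow Ember", "Moonlit Roads"],
    "upbeat workout mix": ["Sunrise Run", "Pulse Check"],
}

def respond(prompt: str) -> list:
    """Return a canned playlist for a recognized prompt, else an empty fallback."""
    key = prompt.lower().strip().rstrip(".").removeprefix("play a ").strip()
    return RESPONSE_MAP.get(key, [])

print(respond("Play a calm late-night Afro mix"))
```

Later phases would replace this fixed table with the NLP classifier (Phase 2) and the dynamic selection algorithm (Phase 3), while keeping the same prompt-in, playlist-out interface.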
🟡 March 20, 2026 – Progress Updates (SIP408/409)
- Designed system architecture
- Defined core features and flow
🟡 March 22, 2026 – Progress Updates (SIP408/409)
- Created high-fidelity UI mockups
- Built demo simulation flow
- Updated SIP website
🟡 March 24, 2026 – Progress Updates (SIP408/409)
- Finalized 5-minute presentation script
- Designed presentation slides
- Integrated visuals and demo elements
Looking ahead, I see Ei Mix as a starting point for a potentially real product. In the future, I plan to refine the user experience and expand the content sources, possibly incorporating biofeedback data (heart rate, mood tracking) to make the AI even more responsive, and adding support for other languages to widen its impact. I am also considering more advanced deep learning models to improve recommendation accuracy and personalization over time.
Future iterations may include deployment to app platforms for extended testing and user feedback collection. This project demonstrates the applied integration of AI, software engineering, and human-centered system design principles.