
Week 8 (Final) Progress Update

  • Writer: Jungwon Bae (정원 배)
  • Jun 20
  • 2 min read

Our AI Weather Art Assistant Is Complete!


After weeks of planning, coding, iterating, and testing — our AI-powered Weather Art assistant is complete and running beautifully on the Raspberry Pi with full voice support, natural language understanding, and a clear touchscreen interface designed for elderly users.


Final Code: Fully Integrated Voice + Touch Interface


This week, we finished integrating all major system components into a unified main.py application, optimized for Raspberry Pi with a 7” HDMI touchscreen. Here’s what our system can now do:


  • 🎙️ Speech-to-Text (STT): Hold a button and speak to issue commands

  • 🗣️ Text-to-Speech (TTS): Listen to news headlines or chatbot responses

  • 🔍 BERT NLU: Understand natural language input and route requests to the correct function (see the routing sketch after this list)

  • 🎵 YouTube Music Player: Type or speak to play songs

  • 📰 Topic-Based News Feed: Powered by The Guardian API

  • ☁️ City-Specific Weather Forecasts: Using OpenWeather API

  • 🗓️ Weekly Reminder System: View, add, and manage daily tasks

  • 💬 Ask AI: Chatbot powered by IBM Watsonx.ai, with responses read aloud
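
Under the hood, every typed or spoken request passes through the same routing step. The snippet below is a hypothetical sketch of that step, assuming a BERT intent classifier fine-tuned on our command set; the local model path, intent labels, and handler stubs are placeholders rather than our exact code.

```python
# Hypothetical intent-routing sketch: a fine-tuned BERT classifier maps a
# transcribed command to the matching feature handler. The model path, label
# names, and handler stubs are placeholders.
from transformers import pipeline

INTENT_HANDLERS = {
    "play_music":   lambda text: print("queueing song via yt_dlp ..."),
    "get_news":     lambda text: print("fetching Guardian headlines ..."),
    "get_weather":  lambda text: print("fetching OpenWeather forecast ..."),
    "add_reminder": lambda text: print("adding task to the weekly planner ..."),
    "ask_ai":       lambda text: print("sending question to Watsonx.ai ..."),
}

# Load the fine-tuned intent classifier from a local directory.
classifier = pipeline("text-classification", model="./models/intent-bert")

def route(command: str) -> None:
    """Classify a transcribed command and dispatch it to the matching handler."""
    intent = classifier(command)[0]["label"]
    handler = INTENT_HANDLERS.get(intent, lambda t: print("Sorry, I didn't catch that."))
    handler(command)

route("what's the weather like in London tomorrow?")
```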


Mic + Speaker Now Working on Raspberry Pi


We configured USB microphone and speaker detection, enabling:

  • Audio recording via PyAudio

  • Real-time response playback (resampled when needed)

  • Compatibility with voice control even on low-resource hardware

Now users can talk to and listen to the device, with audio capture and playback handled natively on the Pi; a minimal recording sketch is shown below.
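
For reference, here is a minimal recording sketch with PyAudio, assuming the default USB input device, 16 kHz mono 16-bit audio, and a fixed 5-second clip; the real app keeps recording for as long as the on-screen button is held.

```python
# Minimal PyAudio recording sketch (assumed settings: default USB input
# device, 16 kHz mono, 16-bit samples, fixed 5-second clip).
import wave
import pyaudio

RATE, CHANNELS, CHUNK, SECONDS = 16000, 1, 1024, 5

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

# Read raw audio frames from the microphone.
frames = [stream.read(CHUNK) for _ in range(int(RATE / CHUNK * SECONDS))]

stream.stop_stream()
stream.close()

# Save the clip as WAV so it can be passed on to the STT service.
with wave.open("command.wav", "wb") as wf:
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(pa.get_sample_size(pyaudio.paInt16))
    wf.setframerate(RATE)
    wf.writeframes(b"".join(frames))

pa.terminate()
```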


Ask AI: Intelligent Conversations


We’ve finished integrating our Watsonx.ai chatbot, allowing the user to:

  • Type or speak a question

  • Receive a concise, voice-read response in under 10 seconds

  • See the answer as text and hear it read aloud via our audio pipeline

The response is displayed in a popup window and logged for traceability. This rounds out our vision of an informative, conversational companion.
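
For illustration, here is a sketch of the underlying call, assuming the ibm-watsonx-ai Python SDK; the API key, endpoint URL, project ID, and model choice are placeholders, and exact class or parameter names may differ between SDK versions.

```python
# Hypothetical "Ask AI" call via the ibm-watsonx-ai SDK. Credentials,
# project ID, and model choice are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    api_key="YOUR_IBM_CLOUD_API_KEY",
    url="https://eu-gb.ml.cloud.ibm.com",
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",   # assumed model choice
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",
    params={"max_new_tokens": 120},           # keep answers short for TTS
)

def ask_ai(question: str) -> str:
    """Return a concise answer for on-screen display and voice playback."""
    return model.generate_text(prompt=question)

print(ask_ai("What should I cook with potatoes and leeks?"))
```

The returned string is what we show in the popup and feed through the same TTS pipeline used for news headlines.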


Poster Finalized!


We also completed and printed our final research poster for the IBM project presentation, summarizing:

  • Our motivation to combat digital exclusion for the elderly

  • Design goals focused on simplicity, portability, and clarity

  • Tech stack: Python, Kivy, IBM Watson STT/TTS, Watsonx.ai, yt_dlp, and BERT

  • Future ideas: biometric authentication and rechargeable battery systems



Final Reflection


This project has been an amazing journey in user-centered AI, fusing modern technologies with design empathy to support elderly individuals living alone. Our final system is:

  • Accessible (emoji-enhanced visuals, voice-controlled)

  • Functional (music, news, weather, reminders, AI chat)

  • Portable and private (no cloud dependencies beyond APIs, press-to-talk design)



🙌 Thanks to IBM, Imperial, and Our Team


We extend our thanks to:

  • IBM for technical guidance and Watson API support

  • Imperial College London for resources and mentorship

  • Our supervisors Dr. Elina Spyrou and John McNamara

  • And to our team — Jungwon Bae, Saxon Shang, Yaohan Huang, Chai Zheng Khoon, Divine Wodi, and Guanxi Lu — for their collaboration and creativity.



 
 
 
