Since I posted my initial video on YouTube, I've added a few new features:
- OpenAI streaming. This allows the reply from ChatGPT to be shown in real time instead of all at once.
- NTP clock. Enables a real-time clock that pulls the current time from an NTP server.
- New clock UI.
- Finalized keyboard layout and functionality.
- Improved logic and animation features.
- Improved network interface and buffer robustness.
- More importantly, I have started implementing adaptive real-time hardware prompting.
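To illustrate the streaming feature: OpenAI's chat API can send the reply as a server-sent-event stream, where each `data:` line carries a JSON chunk with a `choices[0].delta.content` fragment. A minimal sketch of collecting those fragments (the chunk payloads here are illustrative examples, not captured traffic):

```python
import json

def extract_stream_text(sse_lines):
    """Collect content deltas from OpenAI-style SSE stream lines."""
    pieces = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        delta = json.loads(payload)["choices"][0].get("delta", {})
        if "content" in delta:
            # On the device, each piece is drawn immediately instead of buffered.
            pieces.append(delta["content"])
    return "".join(pieces)

# Example chunks in the shape they arrive over the wire
chunks = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":", world"}}]}',
    "data: [DONE]",
]
print(extract_stream_text(chunks))  # Hello, world
```

On the keyboard itself the same loop runs over the TLS socket buffer, rendering each fragment to the display as it lands rather than waiting for the full reply.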
Adaptive Hardware Prompting
If you still haven't grasped what I mean by adaptive hardware prompting, here's an example. First, if you have ChatGPT open, try asking: what time is it right now? Or where do I live, or where do I work? I guarantee it cannot give you an answer, because an LLM simply does not have real-time information inherently.
In the example below, however, if I ask how long I have worked at my current employer, it gives me a clear answer directly. Essentially, what's happening under the hood is:
- The user enters the prompt/query.
- The Generative kAiboard checks and validates the prompt. In this case it "understood" the context about time and work.
- The Generative kAiboard captures relevant information from its local knowledge server: the current time and info about my current occupation.
- The Generative kAiboard combines the original user prompt from step 1 with the information collected in step 3.
- Finally, the combined prompt is transmitted to OpenAI and the appropriate reply is returned.
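The steps above can be sketched as a small prompt-augmentation routine. Everything here is hypothetical: the keyword lists, the `LOCAL_KNOWLEDGE` store, and the occupation string are stand-ins for whatever the real firmware keeps on its local knowledge server.

```python
from datetime import datetime, timezone

# Hypothetical local knowledge store; keys and values are illustrative only.
LOCAL_KNOWLEDGE = {
    "occupation": "Embedded engineer at ExampleCorp since 2019-03-01",
}

def detect_context(prompt):
    """Step 2: a rough keyword check standing in for prompt validation."""
    tags = []
    lowered = prompt.lower()
    if any(w in lowered for w in ("time", "long", "since", "when")):
        tags.append("time")
    if any(w in lowered for w in ("work", "job", "employment", "occupation")):
        tags.append("work")
    return tags

def augment_prompt(prompt):
    """Steps 3-4: gather matching local facts and merge them with the prompt."""
    tags = detect_context(prompt)
    facts = []
    if "time" in tags:
        facts.append(f"Current time: {datetime.now(timezone.utc).isoformat()}")
    if "work" in tags:
        facts.append(f"Occupation: {LOCAL_KNOWLEDGE['occupation']}")
    if not facts:
        return prompt  # nothing relevant detected: send the prompt unchanged
    return "\n".join(facts) + "\n\nUser question: " + prompt

print(augment_prompt("How long have I worked in my current employment?"))
```

The augmented string is what actually goes to OpenAI in step 5, so the model answers against the injected facts rather than its frozen training data.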
And as you've perhaps seen in the video, the answer can then be transmitted directly to your computer as though you were typing it yourself.
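For the curious, typing the reply into the host works by presenting the device as a standard USB HID keyboard and converting each character to a (modifier, keycode) report. A minimal sketch of that mapping, covering letters and spaces only (real firmware would use the full HID usage table):

```python
# Minimal ASCII -> USB HID usage ID mapping, a sketch of the keystroke step.
# HID keyboard usages: 'a'..'z' are 0x04..0x1D, spacebar is 0x2C;
# 0x02 is the left-shift modifier bit used for uppercase letters.
def ascii_to_hid(ch):
    if "a" <= ch <= "z":
        return (0x00, 0x04 + ord(ch) - ord("a"))          # (modifier, keycode)
    if "A" <= ch <= "Z":
        return (0x02, 0x04 + ord(ch.lower()) - ord("a"))  # shift + letter
    if ch == " ":
        return (0x00, 0x2C)                               # spacebar
    raise ValueError(f"unmapped character: {ch!r}")

reports = [ascii_to_hid(c) for c in "Hi there"]
print(reports[0])  # (2, 11) -> shift + 'h'
```

Each report is then sent over the USB HID endpoint followed by a key-release, so the host sees ordinary keystrokes.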
I'll keep you posted on more updates; I'm making commits to my GitHub repo almost every day. So stay tuned.