First of all, I think this project is useful and interesting. I started by listing possible scenarios and prioritizing them according to my abilities, using the list both to show what AR glasses can do and to identify the modules the project needs. Current functions include: current time query, line query (via the data bus), animal and plant identification, QR code recognition, and installing new functions from a QR code.

The project is divided into a hardware part and a software part. At present, only voice control is supported, in order to highlight what makes AR glasses distinctive.

The hardware is controlled by a Raspberry Pi. A USB sound card handles the audio input and output for voice control. A camera captures pictures, which are shown on an LCD screen through an atomic mirror, so we can confirm that the photos we take are what we need. From a practical point of view, a joystick or even a Bluetooth keyboard may be added in the future; in fact my project already includes a Bluetooth keyboard/mouse, but I cannot make good use of them yet. For the basic frame I chose a head-mounted magnifying glass, which meets the current needs, is very easy to build, and simplifies the hardware selection. In the future I hope to make it light enough to feel more like a pair of glasses. If you come up with a better design, I hope you will share it.

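To make the capture path concrete, here is a minimal sketch of how the camera could feed the preview, assuming the official Raspberry Pi camera module and the legacy picamera library; how the preview actually reaches the LCD depends on the display driver, so treat this as an illustration rather than the project's exact code.

```python
# Illustrative sketch: assumes the official Pi camera and the legacy `picamera`
# library, with the LCD mirroring the framebuffer that receives the preview.
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)

camera.start_preview()        # preview rendered to the framebuffer / LCD
sleep(2)                      # let the sensor settle exposure and white balance
camera.capture('frame.jpg')   # this still is what the recognition services see
camera.stop_preview()
camera.close()
```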

The software part mainly relies on cloud services; the local side includes offline wake-up, instruction recognition, and a simple process engine built on a data bus.

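The post does not show the engine itself, so the following is only a minimal sketch of what a data-bus-driven process engine can look like: modules subscribe to topics, and each step reacts to an event and may publish the next one. The topic names and handlers here are hypothetical, not the project's actual ones.

```python
# Minimal sketch of a data-bus-driven process engine (topic names are illustrative).
from collections import defaultdict
from typing import Any, Callable

class DataBus:
    """In-process publish/subscribe bus; each step reacts to an event and may publish new ones."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any = None) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

bus = DataBus()

# Wiring: wake word -> speech recognition -> command dispatch.
bus.subscribe("wake", lambda _: bus.publish("asr.request"))
bus.subscribe("asr.result", lambda text: print("recognized command:", text))

# Simulate one round trip through the bus.
bus.publish("wake")
bus.publish("asr.result", "what time is it")
```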

Because I use real-time recording for speech recognition, calling the online speech recognition service all the time would waste a lot of resources. Therefore, an offline speech recognition module must be used to avoid unnecessary calls to the online service. A button could be used to trigger voice control instead; that would be more practical, but it does not feel cool enough.

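As an illustration of this gating idea (not the module the project actually uses), the sketch below runs PocketSphinx offline through the speech_recognition package and only records the real command for the cloud service after the wake word is heard. PocketSphinx ships with an English model only, so a Chinese wake word would need a different offline engine or model; the wake word itself is a placeholder.

```python
# Hedged sketch: cheap offline recognition runs continuously, and audio is only
# forwarded to the online ASR after the wake word is detected.
import speech_recognition as sr

WAKE_WORD = "glasses"            # illustrative wake word
recognizer = sr.Recognizer()

def heard_wake_word(audio: sr.AudioData) -> bool:
    try:
        text = recognizer.recognize_sphinx(audio)   # offline, no network cost
    except sr.UnknownValueError:
        return False
    return WAKE_WORD in text.lower()

with sr.Microphone() as source:                      # USB sound card input
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source, phrase_time_limit=3)
        if heard_wake_word(audio):
            # Only now record the real command and hand it to the cloud ASR.
            command_audio = recognizer.listen(source, phrase_time_limit=5)
            break
```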

Instruction recognition is implemented with Baidu's AI speech service, which supports both Chinese and English, and its English support is quite good. Of course, you could also use Google's or Amazon's cloud services. I have not had time to implement multi-service support, but the software architecture makes it easy to add.

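For reference, a call to Baidu's speech recognition through the official baidu-aip Python SDK looks roughly like this. The credentials are placeholders from the Baidu AI console, and the dev_pid language codes (1537 for Mandarin, 1737 for English) follow Baidu's documentation; the file name and helper are hypothetical.

```python
# Sketch of Baidu short-speech recognition via the official `baidu-aip` SDK.
from aip import AipSpeech

APP_ID = "your-app-id"            # placeholders: create these in the Baidu AI console
API_KEY = "your-api-key"
SECRET_KEY = "your-secret-key"

client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)

def recognize(path: str, english: bool = False) -> str:
    """Send a 16 kHz mono WAV file to Baidu ASR and return the first transcript."""
    with open(path, "rb") as f:
        audio = f.read()
    result = client.asr(audio, "wav", 16000, {"dev_pid": 1737 if english else 1537})
    if result.get("err_no") == 0:
        return result["result"][0]
    raise RuntimeError(f"ASR failed: {result}")

print(recognize("command.wav"))
```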

At present, my code is based on Chinese. If there is enough interest, I will convert it to an English version and share it with you.

Because I am Chinese, I used many Chinese resources and services; you can substitute resources and services that suit you. I want to thank all the geeks who have shared their designs and achievements; their work has inspired me.