There is a hole in our educational process, and it is called STEM: Science, Technology, Engineering, and Mathematics. And since the arts teach collaboration and teamwork, we use the broader acronym STEAM. The goal of this Hackaday project is to use the Raspberry Pi 2 board in a way we have never seen done before, one with definite applicability to the education of everyone in our society, from children through adults. If we can put these PicoClusters together (PicoCluster is the name of the partner company that started this hardware project) to handle large amounts of data at high speed, then we have what we need: something we can put on the market in varying sizes and locations, knowing that smaller configurations can meet the needs of smaller remote cities while we can also scale up to as much processing power as is needed. This is a gamble on our part, since we would normally rely on industry-standard hardware and software, likely in a cloud deployment.
The processing power needed in education is significant; the Yahoo Hadoop team has run clusters of up to 4,000 nodes. We won't be sequencing DNA or analyzing the data coming in from the world's most powerful radio telescopes, but we will accumulate enough performance data per student that we need scalable, performant hardware that can grow as new learners register in the system. We will gather usage data at high rates. Our computer-adaptive elearning STEM curriculum adapts to the student's profile (including many other types of cognitive performance and learning-style data), and we also intend to use facial recognition software to report the emotion a learner is feeling once every 2 to 3 seconds throughout each learner's elearning session. There will need to be an age-based and/or voluntary opt-out, since we don't want to collect image data from younger learners, and some older learners may object as well. In those cases we will build the profile solely from learner performance on cognitive tasks and exclude data from the affective domain.
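As a rough sketch of how that opt-out rule could be enforced in code (all names here are illustrative assumptions, not our actual data model), the key invariant is that affective samples are simply never stored for an opted-out learner, while cognitive results are always recorded:

```python
import random
from dataclasses import dataclass, field

# The 2-3 second cadence described above, as a configurable range.
SAMPLE_INTERVAL_RANGE = (2.0, 3.0)  # seconds


@dataclass
class LearnerProfile:
    """Hypothetical per-learner profile; the real schema will differ."""
    learner_id: str
    affective_opt_out: bool = False
    cognitive_scores: list = field(default_factory=list)
    emotion_samples: list = field(default_factory=list)


def record_cognitive_result(profile: LearnerProfile, score: float) -> None:
    # Cognitive-task performance is always part of the profile.
    profile.cognitive_scores.append(score)


def record_emotion_sample(profile: LearnerProfile, emotion_label: str) -> bool:
    # Respect the opt-out: affective data is never stored for these learners.
    if profile.affective_opt_out:
        return False
    profile.emotion_samples.append(emotion_label)
    return True


def next_sample_delay(rng=random.random) -> float:
    # Pick the delay before the next facial-recognition sample,
    # uniformly within the 2-3 second window.
    lo, hi = SAMPLE_INTERVAL_RANGE
    return lo + (hi - lo) * rng()
```

The point of the sketch is that the opt-out check lives at the single write path for affective data, so no downstream analytics can see emotion samples that should not exist.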
And we need overall learning-profile data for all individuals in our financially and educationally stratified society; a high-quality STEAM education must be available to every single member of it. We need it available in schools, makerspaces, libraries; the list goes on. But it's not just financial and educational stratification that is at stake. The larger project, deploying the software so that everyone in our country (and the rest of the world) can use STEAM, will not be possible unless we get the performance we need at a cost we can afford. That STEAM-educated world will create the experts in deep learning, big data, robotics, and man-machine interface devices who will move us into a new age. We will participate in and facilitate professional development for teachers who need STEAM training themselves. We are already exploring partnerships with companies in Asia that have asked to distribute a major portion of our STEAM elearning system in both Hong Kong and Singapore, with the specific requirement that it be distributed in English (the accepted language of science).
To summarize, we need an elearning and hardware system that is highly effective, available, affordable, and customizable, and one that can flex enough to meet all of our challenges and requirements. That is no small requirement, but we may be able to meet it using parallel processing across many very small computers.
After integration testing we will have a functional prototype that populates the testing environment with data from a backup database. The prototype will build sufficient data to let us performance test the elearning system on the 100-node Raspberry Pi cluster.