Bret Victor had a few requirements for the UI beyond data collection. Most of this discussion is premature, but it could have an impact on the data collection aspect of the design, so I'll talk about it now.
There are three things Bret wanted to be able to see:
- Seeing Inside (Automatic Data Collection, Displaying Data, Taken for Granted)
- Seeing Across Time (Controlling Time, Automatic Notebook)
- Seeing Across Possibilities (Automatic Experimentation)
Seeing inside is the most basic layer of functionality. Every single bit of data about the robot should be collected, without any effort from the user ("Taken for Granted"), and displayed in a usable format automatically.
Once that is in place, we can display all of the data we have for this session in one graph, rewind to a previous point and see what the world was then, and compare the data from this session to previous sessions ("Automatic Notebook").
What I Can Do For Free
My main UI for viewing data in a particular session displays everything as a line graph against time, which takes care of both displaying the data and seeing across time by default.
I interpret Controlling Time as being able to focus on a particular point in the past, and see highlighted data at that point. In that context, Controlling Time does not add any new requirements to data collection; it's just a slider on the UI.
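On the data side, that slider amounts to picking out the most recent sample at or before the chosen time. A minimal sketch, assuming samples are stored as sorted (timestamp, value) pairs; the function name and data layout are mine, not part of any existing design:

```python
import bisect

def sample_at(samples, t):
    """Return the most recent (timestamp, value) sample at or before time t.

    `samples` is a list of (timestamp, value) pairs sorted by timestamp.
    This is all the slider needs -- no extra data collection required.
    """
    timestamps = [ts for ts, _ in samples]
    i = bisect.bisect_right(timestamps, t)
    if i == 0:
        return None  # t is before the first sample
    return samples[i - 1]

# Example: light-sensor readings recorded during one session
session = [(0.0, 512), (0.5, 530), (1.0, 498), (1.5, 601)]
print(sample_at(session, 1.2))  # -> (1.0, 498)
```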
Similarly, an Automatic Notebook falls out of supporting multiple sessions in the first place. It just requires that the user can walk up, select a project, and start recording.
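The "walk up, select a project, start recording" flow can be sketched as a session store keyed by project. This is a hypothetical sketch of mine, not an existing API; the class and method names are made up:

```python
class Notebook:
    """Automatic notebook: every recording session is kept, keyed by
    project, so past runs can be compared without any filing effort."""

    def __init__(self):
        self.projects = {}  # project name -> list of recorded sessions

    def start_session(self, project):
        """Walk up, pick a project, start recording a new session."""
        session = []  # list of (timestamp, value) samples
        self.projects.setdefault(project, []).append(session)
        return session

    def sessions(self, project):
        """All past sessions for a project, available for comparison."""
        return self.projects.get(project, [])

nb = Notebook()
s1 = nb.start_session("light-seeker")
s1.extend([(0.0, 512), (1.0, 498)])
s2 = nb.start_session("light-seeker")
s2.extend([(0.0, 520), (1.0, 505)])
print(len(nb.sessions("light-seeker")))  # -> 2
```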
Taken for Granted
The first hard part is making the sensor data collection trivial. I'm assuming that the best way to do this is to instrument the sensors: provide a variety of small devices that can connect to most analog or digital sensors on one side and any microcontroller on the other, and use those to do the data logging.
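The logging side of one such interposer device is simple in outline. A simulated sketch, assuming the device samples on a timer; `read_fn` stands in for whatever the real hardware does to read the analog or digital line:

```python
import time

class SensorLogger:
    """Timestamped logger for a single sensor channel.

    `read_fn` is a stand-in for the real analog/digital sampling
    hardware -- purely hypothetical here.
    """

    def __init__(self, name, read_fn):
        self.name = name
        self.read_fn = read_fn
        self.samples = []  # (timestamp, value) pairs

    def poll(self):
        """Take one sample. The user never calls this by hand; the
        device would run it on a timer ("Taken for Granted")."""
        self.samples.append((time.monotonic(), self.read_fn()))
        return self.samples[-1]

# Simulated light sensor standing in for real hardware
readings = iter([512, 530, 498])
logger = SensorLogger("light", lambda: next(readings))
for _ in range(3):
    logger.poll()
print([v for _, v in logger.samples])  # -> [512, 530, 498]
```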
One alternative is to provide a set of sensors that work out-of-the-box. But the number of ways to measure some tiny scrap of our world is simply staggering, and each of those ways has a multitude of ways of representing the same signal. Plus, adding a radio to each sensor adds up to a lot of power quickly. The main advantage of shipping pre-instrumented sensors is that I no longer have to trust the user to figure out the instrumentation on their own, which could be an engineering challenge in itself.
I could also provide a variety of microcontrollers and microcomputers (e.g., a RasPi) that work out-of-the-box. This would drag me into custom PCB manufacture, and force me to choose which development environments to impose on my users. On the flipside, I could simply log every pin on the controller and be done with all of my logging needs, and there would be only one radio connection to worry about, not five. As my main concern is easy development and debugging, I really do not want to force my users into a particular development environment; it just feels wrong.
Automatic Experimentation
What Bret wants to be able to do is take a variable or constant, and automatically find the best value for it to do a particular thing. So, if we wanted a light-seeking robot, what sensitivity should we use on the light sensor? If we're making a remote-controlled rover, how big should the deadzone on the motors be?
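At its simplest, this is a parameter sweep: try candidate values for the constant and keep the one that scores best. A minimal sketch; the scoring function below is a made-up cost model purely for illustration, standing in for actually running the robot (or replaying a recorded session) with each value:

```python
def best_constant(candidates, score_fn):
    """Try each candidate value for a tunable constant and return
    the one with the highest score. `score_fn` stands in for a real
    trial run -- hypothetical here."""
    return max(candidates, key=score_fn)

def deadzone_score(dz):
    """Toy cost model for a motor deadzone (invented for this sketch):
    small deadzones let noise through, big ones ignore real commands."""
    jitter = max(0, 10 - dz)
    sluggishness = dz * 0.5
    return -(jitter + sluggishness)

print(best_constant(range(0, 21), deadzone_score))  # -> 10
```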
Implementing this requires two capabilities: the table needs to identify constants in the code, and it needs to recompile that code and download it to the robot. Both require combining the debug environment with the development environment.
Up until this section, there's always been a robot between the development environment (presumably a laptop) and the debug environment (the table). You couldn't do anything with the table if you didn't have a mechanically working robot, which means the debug environment had no idea what the code was.
Therefore, automatic experimentation is inherently low on the priority list. I can't force people to work with Arduino if they need a RasPi,...