The Human Connection: 1st Impression

What do the oceans under the ice of Europa sound like? Can we communicate with whatever is down there?

A fundamental transformation of human perception and synthesis of sound, inspired by Star Trek IV. What do the oceans under the ice of Europa sound like? Can we communicate with whatever is down there?

Sounds crazy, doesn't it? Sometimes the "big" ideas need many years to come to fruition. A mission to Europa *will* happen. If there is water under the ice, there will be much to discover. We will need tools to figure it all out once we get there. For now, there is much to do here on Earth to prepare!

The Human Connection - 1st Impression

What do the oceans under the ice of Europa sound like? The Europa mission is the "big" vision. What is under the ice? What kinds of sounds are down there? Who is making those sounds, and can we communicate with those beings?

 [Inspired by Star Trek IV - The Voyage Home]

Prologue

“Connected” is often a misunderstood and misused requirement in design. Adding a network port to a machine does not make it connected. That is like putting clothing on a monkey and calling it human. True connection means adding pathways between independent systems, allowing the “network” to become truly more than the sum of its parts. The independent systems in a “connected” project become more than they could ever be alone because of their connection to the others.

As an example, humans seek out the internet because it allows connection: not simply because the internet can move data, but because we are extended and transformed by pathways to other independent entities on the network. It allows each of us, our knowledge, our thoughts, and our emotions to be bridged. While this connection is still in a somewhat primitive state, one can connect with other information, personalities, thoughts, and emotions that allow us to be more than what we were without them. I believe this is what draws people to “the cloud”. We have a connection to the world that we simply did not have before.

We currently stand on the precipice of a fundamental human transformation. We have the technology to extend our being into the universe, to sense and understand to greater depths than our biology allows. We have the ability to micro-machine and micro-etch physical structures at scales of billionths of a meter. We have the ability to program these devices with abstract rulesets. We have the ability to measure virtually all known physical quantities. These technologies are cheaper than ever and will allow virtually anyone to contribute. Combine this with the capability of connecting human thought across geographically dispersed areas, and we now have the raw materials to make truly remarkable advances in the human condition. With open platforms, we will be able to overcome biological limitations and push into both the world of the small and the universe at large.

The Human Connection: 1st Impression is an experiment in developing open technologies that leverage the latest in low-cost, state-of-the-art electronics to extend human connection and perception into the universe in which we coexist. This concept can span many disciplines, including biology, physics, acoustics, electronics, and engineering. For the Hackaday Prize, this project will look at extending human interaction with fluidic oscillations (AKA sound).

Please start by looking at the eight pages of “conceptual” drawings for the 1st Impression (see Part 1 below). You will see that we will synthesize concepts in acoustics, electronics, mathematics (signal processing), and embedded systems in this first experiment. The ideas presented here are the starting point.

The next step will be the video introduction, followed by the actual documentation of the design process. The design process will be open to all. All software, design drawings, schematics, etc. will be available via GitHub. Instructional videos will be available on YouTube. All of the electronics will use low-cost development tools as much as possible (e.g. a $20 LPC-Link 2 or $35 FRDM-K64F for high-end signal processing, OSH Park for PCBs, etc.). Everything will be available for anyone to learn, hack, and extend.

Part 1: What exactly is the Human Connection - 1st Impression?

I have always loved the Star Trek movies. The concepts of a molecule-sized microprocessor (Star Trek I), planet forming (Star Trek II), and biological communication (Star Trek IV) are “big picture” concepts that keep people dreaming. There is a scene in Star Trek...


  • 2 × LPC-Link 2 (http://www.lpcware.com/lpclink2): one as the debugger, the other as the main dev board.

  • Schematic / PCB for LPC-LINK-2_SHIELD

    ehughes, 08/25/2014 at 15:13

    I got a starter project for the LPC-LINK2_SHIELD (it will have the mic preamp circuits) on GitHub. It has the PCB dimensions, mechanicals, connector positions, etc. set up.

  • Microphone Prototypes

    ehughes, 07/27/2014 at 02:08

    Like all good development on the cheap, I have repurposed a mic design from another acoustics project. The mics shown are Knowles Acoustics SPM404H. While they are only tested to 10 kHz in the datasheet, just about all of the MEMS products respond well into the ultrasonic. Using a piezo transducer, I have verified there is good response out to 80 kHz. Now... I don't have a good calibrated source, so my crude setup (which has peaks and valleys in the spectrum of its own) only tells me I am in the ballpark. They will serve the purpose for the time being. Schematics will be in GitHub at some point... I wish I didn't have to sleep...


  • Algorithm Discussion: New Video Available

    ehughes, 07/13/2014 at 01:35

    I just recorded a video that expands on the algorithms discussed quickly in the entry video. It runs 20+ minutes and should give you an idea of where we are going!

  • Core FFT Performance: 4096-Point Radix-4 CFFT

    ehughes, 07/09/2014 at 14:12

    A core part of the algorithms we will use is a complex-input FFT (a + jb). Before going too far, I wanted to evaluate the FFT performance of the LPC4370's M4 core. Now, an FPGA would rule the roost in processing horsepower, BUT I am trying to keep this as low cost as possible. The LPC4370 on the LPC-Link 2 is a place to start. FPGAs are great once you have everything worked out, but HDL can be unforgiving... (and FPGAs are high cost!)

    So, here are the assumptions:

    --------------------------------------------------------------------------------

    LPC4370: code running on the M4 core. Clock rate of 204 MHz. Execution from RAMLoc128 (0x10000000 - 0x10020000).

    ARM CMSIS-DSP libraries v4.0.1. In particular, I am looking at the function arm_cfft_radix4_q15.

    I am using fixed-point processing.

    Input data is a 4096-element q15_t array in RAM. (Note: all processing is done in place, so the source data must be in RAM.)

    --------------------------------------------------------------------------------

    Now, I am targeting a 200 kHz system sample rate with a 4096-sample block size (this matches the maximum radix-4 block size allowed by CMSIS-DSP). That means I have a window of 4096 / 200 kHz = 20.48 ms to get all my processing done. In the background, new ADC data will be DMA'd into a buffer and data will be DMA'd from an output buffer to the DAC.

    So... drum roll. The function arm_cfft_radix4_q15 takes 2.4 ms. So, I have roughly a factor of 10 margin. Now, this will quickly get eaten up: I have to do a minimum of 2 FFTs (forward and inverse transforms), plus the magic scaling algorithms. Either way, this gives me a good amount of headroom. And I always have 2 other cores ready to go :-)

    I also profiled arm_cfft_radix2_q15. It is a bit slower at 2.9 ms.

    Code is in the hc-1 GitHub repository.
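
    For anyone who wants to reproduce the measurement, here is a minimal sketch of how such a timing test might look. It is not the repository code; it assumes CMSIS-DSP 4.x with the radix-4 q15 API and the Cortex-M4 DWT cycle counter, and the buffer and function names below are mine, not from the repo.

        /* Minimal FFT timing sketch (not the hc-1 repository code).
         * Assumes: CMSIS-DSP v4.x radix-4 q15 API, and ARM_MATH_CM4 defined so
         * that arm_math.h pulls in the Cortex-M4 core header (DWT registers). */
        #include "arm_math.h"

        #define FFT_LEN  4096u
        #define CPU_HZ   204000000u

        /* Interleaved real/imag data; the transform runs in place, so this
         * buffer must live in RAM.  Zero-initialized data is fine for timing. */
        static q15_t fft_buf[2u * FFT_LEN];

        uint32_t benchmark_cfft_cycles(void)
        {
            arm_cfft_radix4_instance_q15 S;

            /* 4096-point forward transform with bit reversal enabled */
            arm_cfft_radix4_init_q15(&S, FFT_LEN, 0, 1);

            /* Enable trace, then zero and start the DWT cycle counter */
            CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
            DWT->CYCCNT = 0u;
            DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;

            arm_cfft_radix4_q15(&S, fft_buf);   /* in-place CFFT */

            /* At 204 MHz, 2.4 ms is roughly 490k cycles; the full processing
             * window for a 4096-sample block at 200 kHz is 20.48 ms. */
            return DWT->CYCCNT;
        }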

    Last notes:

    The board support library sometimes crashes in Board_SystemInit() at boot-up when running from RAM. I think a delay is needed when setting up the clock dividers or the crystal. If I single-step through the code, it works... Also, using the internal oscillator and PLLing up to 204 MHz is fine.
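
    Purely as a guess (I have not verified this on hardware), a crude busy-wait before the crystal/divider setup might paper over the race. The function below is hypothetical and the loop count would need tuning:

        /* Hypothetical workaround, not verified: give the oscillator time to
         * settle before the clock dividers are touched at boot. */
        #include <stdint.h>

        static void crude_startup_delay(void)
        {
            /* volatile keeps the compiler from optimizing the loop away;
             * tune the count as needed.  __NOP() comes from the CMSIS core
             * header already used by the board support package. */
            for (volatile uint32_t i = 0u; i < 250000u; i++) {
                __NOP();
            }
        }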

    These numbers would certainly get worse if running from SPIFI flash. (The LPC4370 is flashless; you have to bootload from SPIFI flash into RAM or execute directly from SPIFI...) Maybe I will test that some other day.

  • Algorithms!

    ehughes, 07/08/2014 at 14:14

    Throwing together hardware is the simple part of a project. The real magic is in the software! People often forget that the software component can be a black hole with no escape. I will soon get started on documenting the algorithms we are going to implement. After all, that is where all the magic is.

    A key piece is evaluating CPU performance. Before I go much further, I need to do some hardware performance tests to get a good idea of what we can put together in the prototype. Stay tuned...

  • Microphone Concept

    ehughes, 07/08/2014 at 14:08

    For the prototype I will be using a MEMS-type microphone (Knowles Acoustics). Virtually all of the MEMS mics respond into the ultrasonic bands. I am now characterizing the current design at >20 kHz.

    The challenge with high-frequency sensors is diffraction. The wavelength becomes comparable to (or much smaller than) the characteristic dimension of the sensor, which can yield "lumpy" response parameters. In fact, most of the data provided by the vendors (e.g. Knowles) shows this effect! I am currently building the prototype mic with the parts I have on hand. This will be a baseline to get started. We can improve the mic down the road...
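
    To put rough numbers on that, here is a quick back-of-the-envelope check (assuming sound in air at about 343 m/s) showing why the ultrasonic end gets lumpy:

        /* Back-of-the-envelope wavelength check, assuming sound in air at
         * roughly 343 m/s.  Compiles and runs on a PC; nothing target-specific. */
        #include <stdio.h>

        int main(void)
        {
            const double c = 343.0;                        /* m/s, air at ~20 C */
            const double freqs_hz[] = { 10e3, 40e3, 80e3 };

            for (unsigned i = 0; i < sizeof freqs_hz / sizeof freqs_hz[0]; i++) {
                double lambda_mm = (c / freqs_hz[i]) * 1000.0;
                printf("%6.0f Hz -> wavelength %.1f mm\n", freqs_hz[i], lambda_mm);
            }
            /* ~34 mm at 10 kHz but only ~4.3 mm at 80 kHz, i.e. on the order of
             * a small mic capsule or port, so diffraction shows up in the data. */
            return 0;
        }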

  • Entry Video

    ehughes, 07/04/2014 at 16:46

    The entry video has been produced...

    https://www.youtube.com/watch?v=MFvtlzdgVHY

  • Getting Started

    ehughes, 06/27/2014 at 16:00

    Got the project description online. Added a link to the GitHub repo where data will be stored. Made announcements on social media.


  • Step 1

    GitHub repo: https://github.com/ehughes/hc-1


Discussions

Adam Fabio wrote 07/06/2014 at 05:56
Interesting project ehughes! I've often wondered what remapping ultrasonic, RF, even visual phenomena (like UV) would give us. Your video really explains what you're working on, and it's a great start. Thanks for entering the Hackaday Prize, and good luck!


ehughes wrote 07/07/2014 at 20:26
Adam:

Also consider that I am proposing the inverse operation: remapping our speech into the other bands. (Quickly discussed in the video... 2 minutes is tough!)

More videos to come!


Ranarchy wrote 07/04/2014 at 20:23
I had a conference call with him in December about a thingy I worked on about a decade ago: imaging a 120-degree fan of near-IR light onto a linear diode array. This would image a narrow strip 120 deg wide by about 5 deg high onto an array of 128 photodiodes. Arbitrarily assign an audio frequency to each photodiode signal and use each diode's signal as an intensity multiplier. Do an inverse FFT on the resulting waveform to get an audio signal representing objects in the room. You have to do it twice -- once for each ear, and the separation between the ears is represented in the phase transform you make for the iFFT. Pipe them into their respective ears, and you should hear imaged objects emitting a tone -- the closer they are, the louder the tone. The phase and amplitude differences permit the brain to determine the angle to the object... The aerospace company I worked for at the time thought they couldn't make money with it, so they didn't fund me to pursue it. ...went on to other things. Eagleman liked it, and may assign it to his grad students. Haven't heard. Would be glad to give you more information, should you like to pursue it or something similar....
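
A rough sketch of the mapping described above, with every constant and name below hypothetical (a real implementation would do an inverse FFT per frame rather than the direct tone sum used here for clarity):

    /* Hypothetical sketch: each of 128 photodiode readings drives the amplitude
     * of one audio tone, and a per-diode inter-aural phase offset stands in for
     * the angle across the 120-degree fan. */
    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N_DIODES    128
    #define SAMPLE_RATE 48000.0
    #define F_LOW       200.0      /* Hz, tone assigned to diode 0          */
    #define F_STEP      60.0       /* Hz spacing between neighboring diodes */

    void synthesize_frame(const float diode[N_DIODES],
                          float left[], float right[], size_t n_samples)
    {
        for (size_t n = 0; n < n_samples; n++) {
            double t = (double)n / SAMPLE_RATE;
            double l = 0.0, r = 0.0;

            for (int d = 0; d < N_DIODES; d++) {
                double f     = F_LOW + F_STEP * d;          /* diode -> tone  */
                double phase = (d - N_DIODES / 2) * 0.01;   /* angle -> phase */

                l += diode[d] * sin(2.0 * M_PI * f * t);
                r += diode[d] * sin(2.0 * M_PI * f * t + phase);
            }
            left[n]  = (float)(l / N_DIODES);   /* crude normalization */
            right[n] = (float)(r / N_DIODES);
        }
    }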


Ranarchy wrote 07/04/2014 at 20:23
...actually, it was closer to 20 years ago...


Ranarchy wrote 07/02/2014 at 15:59
You got pretty close... it's been done before. Check out Dr. David Eagleman's work at Baylor medical school...


ehughes wrote 07/04/2014 at 16:45
Checked him out. Interesting how most work of this type comes from the biology perspective, not the EE perspective.

