Flux Capacitor in CPU Pipelines

A module for CPU pipelines that will end all of the troubles caused by the sequential nature of instruction execution, by means of Time Travel

My Science Fiction source is the DeLorean Time Machine

So! Let's consider the unlikely event of an actual Flux Capacitor being invented, tested, manufactured, and even miniaturized to a VLSI level. Bear with me.

I intend to get useful data from the pipeline and send it back in time in order to correct the hazards. I propose two applications of the module shown in the 1st picture:

1) Data Hazards won't be a problem anymore with the arrangement in the 2nd picture, which amounts to actual register forwarding.
2) Control Hazards could be handled nicely with a similar technique, shown in the 3rd picture.

This is a "connected" device: it connects to itself in the past (or future); the simulation has a microcontroller connected to a display; and computers built this way can connect anyhow.

For a detailed description, read this blog post:

Here's the mandatory video:

==================ATTENTION TIME TRAVELERS================== 


                             PROOF OF CONCEPT OF THIS BEAUTY.

==================ATTENTION TIME TRAVELERS==================

I intend to implement an emulation of a Time-Traveling module to rid us of the tortures of Pipeline Hazards forever.

The simulation will consist of an LPCXpresso LPC1115 microcontroller simulating a machine with such improvement in its pipeline.

The main purpose is to have this built in future microprocessors.

The simulation will show all of the relevant data on a TFT touch-sensing display.


The following text is verbatim from the blog post mentioned in the description:

Flux Capacitors might help prevent pipeline hazards

First off, this is not serious at all. Okay, you've been warned.

Ever wondered about alternative approaches to overcome Pipeline Hazards?

Well, looking at this xkcd schematic I started to wonder about the Flux Capacitor's potential in an actual circuit. I'll tell you what made sense in my head, but first, let me explain the basics. If you already know the basics, you may skip to "Flux Capacitors in Computers".

About Pipelining
It's a way of speeding up code execution by dividing it into several stages that perform sub-tasks simultaneously, with dedicated hardware for each stage. Four consecutive stages are typically used: Fetching the instruction from memory, Decoding it (identifying the operation), Executing the instruction, and Writing Back the result. This ultimately speeds execution up (ideally by a factor equal to the number of stages in the pipeline).
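The overlap described above can be sketched in a few lines. This is a minimal illustration (not the project's code) that prints the classic timing diagram for a 4-stage pipeline, one row per instruction, one column per clock cycle:

```python
# Stages of a classic 4-stage pipeline: Fetch, Decode, Execute, Write Back.
STAGES = ["F", "D", "E", "W"]

def pipeline_diagram(n_instructions):
    """Return one row per instruction; each column is a clock cycle."""
    rows = []
    for i in range(n_instructions):
        # Instruction i enters the pipeline at cycle i, one cycle after i-1.
        row = ["."] * i + STAGES + ["."] * (n_instructions - 1 - i)
        rows.append(" ".join(row))
    return rows

for line in pipeline_diagram(4):
    print(line)
# 4 instructions finish in 4 + (4 - 1) = 7 cycles instead of 4 * 4 = 16.
```

Without pipelining, n instructions of 4 stages each would take 4n cycles; with it, they take 4 + (n - 1), approaching the ideal 4x speedup as n grows.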

Pipelined execution. The numbers in the graph correspond to the instructions listed at the left.

About The Hazards
Hazards are problems related to the use of a pipeline in an instruction execution unit. Two Hazards are considered here:

  • Data Hazards: These happen when an instruction needs data produced by another instruction that entered the pipeline earlier but hasn't finished yet. For example, suppose instruction i writes data into register A, and instruction i+1 (the next one) uses register A as an operand to calculate something else. What happens when i+1 is in the Execute stage and i is in the Write Back stage? The value of register A read by i+1 will be the value it held before instruction i, because i hasn't written its result back yet. This is called a Read After Write (RAW) hazard, or true data dependency. It's a problem because the data used by instruction i+1 is outdated.

Data Hazard between Execute and WriteBack (may happen between other stages)
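The RAW scenario above can be acted out with a toy register file. This is a hypothetical sketch (not the project's code): instruction i's result sits in a pipeline latch while i+1 reads the register file, so i+1 sees the stale value:

```python
# Toy register file: register A starts out holding 5.
regs = {"A": 5}

# Cycle t: instruction i is past Execute; its result (A = 10) is latched
# in a pipeline register but has NOT reached the register file yet.
i_result = 10

# Cycle t: instruction i+1 is in Execute and reads A from the register file,
# getting the STALE value 5 instead of 10. This is the RAW hazard.
operand = regs["A"]

# Cycle t+1: instruction i reaches Write Back and only now updates A.
regs["A"] = i_result

print(operand)  # prints 5: instruction i+1 computed with outdated data
```

Register forwarding fixes this by routing `i_result` straight from the pipeline latch into i+1's Execute stage, bypassing the register file; the project's joke is doing the same thing via time travel instead.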

  • Control Hazards: These concern branches, calls, interrupts, exceptions, resets, etc. Whenever sequential execution must change, that is, whenever the next instruction to fetch is not the next stored instruction, the Fetch stage must know the address of the next instruction (branch target, interrupt vector, etc.). Furthermore, most branches are conditional, and it's not until the Execute stage that the processor knows whether or not the branch should be taken. This is a problem because by the time the processor becomes aware of a taken branch, the instructions already loaded into the pipeline may have to be aborted (execution may have to jump elsewhere). Whether they are aborted depends on the outcome of the branch, which isn't known until Execute. If the branch must be taken, the pipeline must be flushed or bubbled (the instructions that entered after the branch are replaced by NOPs [no operations]). If not, there's no problem. Flushing/bubbling the pipeline hurts because it wastes cycles, and the pipeline is supposed to speed things up.

    Control Hazard with bubble
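The bubble can also be sketched in a few lines. This is a hypothetical illustration (not the project's code) assuming the branch resolves in Execute, two cycles after Fetch, so two wrong-path instructions get squashed when it's taken:

```python
# A tiny program: a conditional branch followed by two fall-through
# instructions and the branch target.
program = ["BEQ target", "ADD", "SUB", "target: MUL"]

def issue_with_branch(taken, resolve_latency=2):
    """Instructions the pipeline actually completes.

    If the branch is taken, the wrong-path instructions fetched during
    the resolve_latency cycles become NOPs (the bubble)."""
    issued = [program[0]]                      # the branch itself
    if taken:
        issued += ["NOP"] * resolve_latency    # bubble: ADD and SUB squashed
        issued.append(program[3])              # fetch resumes at the target
    else:
        issued += [program[1], program[2], program[3]]  # fall through
    return issued

print(issue_with_branch(taken=True))
# ['BEQ target', 'NOP', 'NOP', 'target: MUL']  -> two cycles wasted
```

The deeper the pipeline (the larger `resolve_latency`), the bigger the bubble, which is why real CPUs invest heavily in branch prediction and why the joke fix here is just telling Fetch the outcome from the future.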

There are many approaches to overcoming these hazards, mainly divided into two kinds of strategies: Static...


  • 1 × LPCXpresso LPC1115 Microcontroller board A Cortex-M0 CPU from NXP, on a very nice board by Embedded Artists.
  • 1 × TWR-LCD: Graphical LCD Tower System Module A very nice display for a GUI


Szabolcs Lőrincz wrote 09/02/2014 at 13:11 point
Using quantum mechanics and today's technology, it shouldn't be too complicated to send some bits of information back in time by a couple hundred nanosecs, so I would say this project will be completely feasible in 5-10 years.


ganzuul wrote 08/22/2014 at 16:46 point
It took me a moment to realize this wasn't a sci-fi contest entry which had wandered astray... I love how you scale down the power requirements from gigawatts to watts. - CPUs are famously hotter than nuclear reactors, so it makes perfect sense.

BTW, I hope you realize time machines are inherently un-patentable. - Someone will always travel back in time and submit the patent before yours as their own. ;)


Eduardo Corpeño wrote 08/25/2014 at 13:45 point
Lol thanks! I guess I'm doomed there.
I've also wondered about the effectiveness of this improvement if running in more than one processor at the same time (suppose it's become a standard). Fixing one computer's problem would certainly affect all others, generating tons of "abandoned" universes.
That would make a hell of a Mutual Exclusion problem!


ehughes wrote 08/21/2014 at 01:51 point
Nice! I'll vote for you!


Eduardo Corpeño wrote 08/21/2014 at 04:29 point
Thanks Eli! I'm following your project too. This is exciting!


Liam Marshall wrote 08/20/2014 at 23:02 point
This is actually *beyond cool* from an algorithmic standpoint.


Eduardo Corpeño wrote 08/21/2014 at 04:31 point
IKR! Thanks for the Skull!

