A terminal requires a 'new line' command, so I need some way to move all the characters up one line. The conventional way to do this is to keep a circular array of all the characters on the screen and, whenever a newline character is sent (or pressed), redraw the entire screen.
No, wait, scratch that. The conventional way to do this is by selecting a display with a hardware scrolling function.
Alternatively, I could keep the entire display in a frame buffer and write it out constantly, shifting up one character line whenever there's a new line command.
The math: Keeping the contents of the display in memory as characters requires at least 1840 bytes (80 characters by 23 lines; we don't really care about the top line of characters). Keeping the entire display in memory as a bitmap requires at least 48,000 bytes (800x480 at 1 bpp, with some bit packing). Both of these options are onerous, so here's a way to scroll an entire display in 58 bytes.
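For concreteness, the arithmetic works out like this in C. The 20-pixel character line height is my assumption (480 pixels / 24 character rows); the other figures come straight from the paragraph above:

```c
#include <assert.h>

/* Buffer sizes for the three approaches */
enum {
    TEXT_BYTES   = 80 * 23,          /* character grid, top row ignored */
    FRAME_BYTES  = 800 * 480 / 8,    /* full display bitmap at 1 bpp   */
    LINE_PX      = 20,               /* one character line: 480 px / 24 rows (assumed) */
    COLUMN_BYTES = (480 - LINE_PX + 7) / 8  /* one packed column, rounded up */
};
```

`TEXT_BYTES` is 1840, `FRAME_BYTES` is 48,000, and `COLUMN_BYTES` is 58 -- the scroll buffer discussed below.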
This is a demonstration of reading from the graphic RAM on the display. This display, with an NT35510 controller, has a significant amount of RAM, which the controller sends to the TFT panel. To write to the display, you don't write pixels, you write to the graphic RAM, with the controller doing all the heavy lifting.
The NT35510 controller has a command to read this G-RAM, which lets the microcontroller 'see' what's on the display. With that, scrolling is a simple matter: read one column of pixels, and write it back out shifted up one character line. Each column holds 460 pixels we care about -- the display is 480 pixels tall, and the top 20 pixels (one character line) can be ignored -- so the entire screen can be scrolled, one column at a time, with just a 460-byte buffer. With bit packing, the entire scroll function can be done using 58 bytes of a microcontroller's data RAM.
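Here's a sketch of that per-column scroll, under the assumption that one character line is 20 pixels tall. The `gram_col` array stands in for one column of the display's G-RAM; in the real routine those reads and writes would go through the NT35510's memory-read and memory-write commands with the address window set to a single column:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define COL_H  480                 /* display height in pixels        */
#define LINE_H 20                  /* one character line (assumed)    */
#define VIS    (COL_H - LINE_H)    /* 460 pixels that actually move   */
#define PACKED ((VIS + 7) / 8)     /* 58 bytes, packed 8 px per byte  */

/* Stand-in for one column of NT35510 G-RAM; the real version would
 * read and write this through the controller instead. */
static uint8_t gram_col[COL_H];

/* Scroll one column up by one character line, using only a 58-byte
 * packed buffer of scratch RAM. */
static void scroll_column(void) {
    uint8_t packed[PACKED] = {0};

    /* Read rows LINE_H..COL_H-1 and pack them 8 pixels per byte */
    for (int y = 0; y < VIS; y++)
        if (gram_col[y + LINE_H])
            packed[y / 8] |= (uint8_t)(1u << (y % 8));

    /* Unpack and write them back one character line higher */
    for (int y = 0; y < VIS; y++)
        gram_col[y] = (packed[y / 8] >> (y % 8)) & 1u;

    /* Blank the newly exposed bottom character line */
    memset(gram_col + VIS, 0, LINE_H);
}
```

Repeating this for all 800 columns scrolls the whole screen; the 58-byte `packed` buffer is the only scratch memory the scroll ever needs.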
The ability to access the G-RAM of the display opens up a lot of possibilities. For example, it is now possible to code Conway's Game of Life for this display. The naive implementation of GoL needs two framebuffers -- 96 kB for this display -- one for the current state, and another to calculate the next generation. With this technique, I can code GoL in just a hundred or so bytes by iterating over the display, the way a few demoscene effects do. This technique can also be applied to other cellular automata.
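One way that iteration could look, sketched with a hypothetical `gol_next_row` helper: keep a sliding window of just three rows, computing each row of the next generation from the row above, the row itself, and the row below. In the display version those three rows would be read straight from G-RAM rather than from a second 48 kB framebuffer; cells are one byte each here for clarity, though they'd be bit-packed on the real hardware:

```c
#include <assert.h>
#include <stdint.h>

/* Compute one row of the next Game of Life generation from a
 * three-row window. Cells are 0 (dead) or 1 (alive); cells past
 * the edges of the row are treated as dead. */
static void gol_next_row(const uint8_t *above, const uint8_t *cur,
                         const uint8_t *below, uint8_t *out, int w) {
    for (int x = 0; x < w; x++) {
        int n = 0;  /* live-neighbour count */
        for (int dx = -1; dx <= 1; dx++) {
            int xx = x + dx;
            if (xx < 0 || xx >= w)
                continue;
            n += above[xx] + below[xx];
            if (dx != 0)
                n += cur[xx];
        }
        /* Standard B3/S23 rule: born with 3 neighbours,
         * survives with 2 or 3 */
        out[x] = (n == 3 || (n == 2 && cur[x])) ? 1 : 0;
    }
}
```

The outer loop (not shown) would slide this window down the display, writing each finished row back to G-RAM once its neighbours no longer need the old values.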
Alternatively -- and I'm just spitballing here -- the ability to read and write a very large bit of RAM means the display + microcontroller combo can become a Turing machine, with all the inner machinations completely visible. I'm not saying I'm going to program a Turing machine in this display, I'm just saying it's possible.
I would like to take this opportunity to say I am not aware of another graphics library that supports reading from G-RAM. The Adafruit GFX library doesn't support reading from G-RAM, the Arduino library doesn't, and the GLCD library doesn't. This technique is uncommon enough that it merits a Hackaday post (in-link).