Collection of small hardware and/or software projects not worthy enough for dedicated project pages, but interesting enough to share!
Before I begin, I need to point out that this log follows My Introduction to Linear Programming and GLPK. So, any explanation of why I'm adding logs on flow network problems, along with references to the underlying topics, I will either skim over or entirely avoid here! The relevant takeaway from the previous log is that I've been teaching myself the many ways the Maximum Flow Problem ( MFP ) and related problems can be solved, which led me to Linear Programming ( LP ) optimization with GNU MathProg ( GMPL ) and its C API, the GNU Linear Programming Kit ( GLPK ).
However, even though I find GMPL to have the most intuitive syntax, the language has quite a few limitations! The GLPK API is nice in that I get to use the language I'm most familiar with, but it forces the user to always formulate their problem in the standard LP form. This leads me to the focus of this post.
The Idea: Learn how to solve Maximum Flow Problems in MATLAB and LEMON ( C++ )
The tools that sparked my interest the most are MATLAB, LEMON, and the Boost Graph Library ( BGL ). MATLAB, to me, is the most convenient language / development environment when it comes to reducing the time it takes to write the code! Heck, it has carried me throughout my years as an undergraduate and graduate student. But, I actually have more experience in C++, and so I decided to look into two possible graph libraries. From what I can gather, two of the most well-known graph libraries are LEMON and BGL, and for a moment I was really tempted to go with the latter. But, a comment on Stack Overflow suggested I may run into issues with the documentation. I should point out that the comment was posted back in 2011, so there's a good chance BGL is better documented today. In the end, I decided to learn LEMON over BGL, based primarily on how intuitive I found the library to be.
First and Second Implementations: MCFP-CF Solved with MATLAB and LEMON
I basically spent part of last week and a lot of the weekend reading through documentation and implementing my own examples for both MATLAB and LEMON. In this log, I present three of those examples. The first two solve the same Concurrent-Flow variant of the Multi-Commodity Flow Problem ( MCFP-CF ) with LP, as was done in the previous log; this time, however, I do both a MATLAB and a C++ implementation.
For those who don't already know, MATLAB allows dynamic access to the fields of a structure. Effectively, this is how MATLAB implements associative arrays ( e.g. Python dictionaries, the C++ map class, etc. ). I'm not sure of its performance, but it can make very nasty-looking code a lot less nasty. Unfortunately, the first time I wrote the MATLAB implementation, I COMPLETELY forgot MATLAB could do this, and as a result, ended up with far too many for-loops, something to be avoided in MATLAB. The second revision, whose MATLAB source code can be found here, is a lot cleaner! Granted, it probably runs poorly, since I focused on making the code readable rather than efficient.
The C++ implementation can be found here. I should point out that LEMON doesn't do any of the LP solving itself. Instead, it interfaces with other solvers; in my case, it just defaults to GLPK! Compared to the MATLAB implementation, the C++ solution is a lot longer in terms of number of lines, even excluding comments and declarations! And, even after becoming familiar with the LEMON library, it definitely took more time to write the C++ implementation! But, C++/C solutions seem more prevalent in the NoC-related research I've looked at, probably because of the gains in performance. The C++ implementation is also more accessible, since anyone can freely download the compilation tools and the required libraries.
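To give a flavor of the LEMON side, here is a minimal sketch of its LP interface on a toy problem ( this mirrors LEMON's own LP demo rather than my MCFP-CF code; the variables and coefficients below are made up purely for illustration ):

```cpp
#include <iostream>
#include <lemon/lp.h>   // LEMON's generic LP interface

using namespace lemon;

int main()
{
    // Lp represents one LP problem instance.
    Lp lp;

    // Columns are the decision variables.
    Lp::Col x1 = lp.addCol();
    Lp::Col x2 = lp.addCol();

    // Rows are the constraints, written as ordinary expressions.
    lp.addRow(x1 + x2 <= 100);
    lp.colLowerBound(x1, 0);
    lp.colLowerBound(x2, 0);

    // Objective: maximize 10*x1 + 6*x2.
    lp.max();
    lp.obj(10 * x1 + 6 * x2);

    // Hand the problem off to the underlying solver.
    lp.solve();

    if (lp.primalType() == Lp::OPTIMAL) {
        std::cout << "objective = " << lp.primal() << "\n"
                  << "x1 = " << lp.primal(x1) << "\n"
                  << "x2 = " << lp.primal(x2) << std::endl;
    } else {
        std::cout << "no optimal solution found" << std::endl;
    }
    return 0;
}
```

As far as I can tell, the Lp typedef simply resolves to whichever solver backend LEMON was configured with at build time, which is why my code ends up calling GLPK without me ever asking for it explicitly.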
Third Implementation: MCFP-CF Solved Iteratively with LEMON
This final example solves the same MCFP-CF, but with a slightly different approach. The idea behind the algorithm I actually got from this dissertation. Before I jump into any details, the full source...
( An observant reader noticed an inconsistency with the MCFP-CF model! I modified the GMPL code on GitHub and the snippet presented in this log, but I will later modify the text in this log to reflect the changes made to the code! )
So, I can already foresee the "Embedded Systems" project becoming a dump for anything and everything related to engineering. I sort of want a place where I can post small blogs on things I find interesting, and HACKADAY just seems like the ideal place. Plus, I find it really helps to understand an idea by explaining it in your own words. Now on to what this log is supposed to be about!
The Idea: Learning Linear Programming and GMPL
Before I elaborate on the idea behind this log, I first want to point out why I am posting this particular log, and also explain some of my long-term goals. At some point, I aim to create a project page detailing the research project I have been working on for my degree. I'm not going to explain any specifics in this post, but a crucial facet of the work is to implement an on-chip distributed network, optimized for satisfying real-time constraints ( for reasons that will be explained in the future ). The pertinent research area related to that facet is Network-on-Chip ( NoC ), particularly the area having to do with satisfying Guaranteed Services ( GS ). A good introduction to the area can be found here!
However, my background is in Computer Architecture and Embedded Systems! Although I'm not new to the idea of NoCs ( effectively, building on-chip communication around switching networks, like those on which the Internet depends ), I still have a lot to learn regarding the theory, specifically the areas having to do with circuit switching and GS quality of service. More importantly, I need to learn the tools needed to understand a lot of that theory, which brings me to a ( new to me ) though fascinating area of mathematical optimization: Linear Programming ( LP ).
From my basic understanding, LP problems are always defined as the optimization ( either maximization or minimization ) of a linear objective function with respect to a set of constraints, which are typically linear inequalities but can include equalities.
Actually, I'm not sure whether or not it's okay to count equalities as legitimate constraints, since every source I've checked defines the constraints as a bunch of inequalities. In spite of those sources, though, there do appear to be a lot of problems for which an equality is needed. So I'll go ahead and assume equalities are somehow implied!
( until someone much smarter than I decides to yell at me, of course! )
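For my own reference, the general form I keep running into looks roughly like this ( written for a maximization; c is the cost vector, A the constraint matrix, b the bound vector, and x the decision variables ):

```latex
\begin{aligned}
\text{maximize}   \quad & c^{T} x   \\
\text{subject to} \quad & A x \le b \\
                        & x \ge 0
\end{aligned}
```

One detail that makes me less worried about the equality question: an equality constraint $a^{T}x = b$ can always be rewritten as the pair of inequalities $a^{T}x \le b$ and $-a^{T}x \le -b$, so a model containing equalities still fits the inequality-only definition.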
In addition to learning the fundamentals of LP, I want to also test my understanding through trying to recreate the results of various LP problems related to my application. The tool that appears to be most common for modeling and solving LP problems is the GNU Mathematical Programming Language ( GMPL ) ( which is partially derived from AMPL ). I was initially debating whether or not to use MATLAB's LP tools. For now I will stick with GMPL since, from various online postings, it seems the general consensus is that GMPL/GLPK runs comparatively faster and is more common for my application.
The Theory: GLPK API and the Multi-Commodity Flow Problem
I decided to first get practice with both the GLPK API ( the C/C++ interface to GLPK ), as well as GMPL itself. And, as it turns out, programming with the GLPK API is a TEDIOUS experience; many function calls are needed to implement even the simplest of models. Worst of all, the API doesn't appear to offer any way of working with sets, other than importing a GMPL model!
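To illustrate just how tedious it gets, here is roughly what a two-variable toy LP ( maximize 10*x1 + 6*x2 subject to x1 + x2 <= 100, x1, x2 >= 0 ) looks like through the raw C API. This is not one of my actual models, just a sketch; every row, column, and matrix coefficient has to be loaded index by index:

```cpp
#include <cstdio>
#include <glpk.h>   // the GLPK C API

int main()
{
    glp_prob *lp = glp_create_prob();
    glp_set_obj_dir(lp, GLP_MAX);

    // One row (constraint) with only an upper bound of 100.
    glp_add_rows(lp, 1);
    glp_set_row_bnds(lp, 1, GLP_UP, 0.0, 100.0);

    // Two columns (variables), each bounded below by 0.
    glp_add_cols(lp, 2);
    glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);
    glp_set_col_bnds(lp, 2, GLP_LO, 0.0, 0.0);
    glp_set_obj_coef(lp, 1, 10.0);
    glp_set_obj_coef(lp, 2, 6.0);

    // The constraint matrix is loaded as (row, column, value) triplets,
    // using 1-based indexing with element 0 unused.
    int    ia[1 + 2] = {0, 1, 1};
    int    ja[1 + 2] = {0, 1, 2};
    double ar[1 + 2] = {0, 1.0, 1.0};
    glp_load_matrix(lp, 2, ia, ja, ar);

    glp_simplex(lp, NULL);
    std::printf("objective = %g, x1 = %g, x2 = %g\n",
                glp_get_obj_val(lp),
                glp_get_col_prim(lp, 1),
                glp_get_col_prim(lp, 2));

    glp_delete_prob(lp);
    return 0;
}
```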
This leads me to the "project" I want to present! As a way to learn how bandwidth across a network can be allocated according to the demands of the network, I decided to implement the Multi-Commodity Flow Problem ( MCFP ), optimized for Concurrent Flow ( CF )! ...
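For context, the usual concurrent-flow formulation ( as I understand it; the exact GMPL model in the repository may differ slightly ) maximizes the fraction λ of every commodity's demand that can be routed simultaneously. Here f_k(e) is the flow of commodity k on arc e, c(e) the arc capacity, d_k the demand of commodity k, and δ⁺(v) / δ⁻(v) the arcs leaving / entering node v:

```latex
\begin{aligned}
\text{maximize}   \quad & \lambda \\
\text{subject to} \quad & \sum_{k} f_{k}(e) \le c(e)
    && \forall e \in E \\
& \sum_{e \in \delta^{-}(v)} f_{k}(e) = \sum_{e \in \delta^{+}(v)} f_{k}(e)
    && \forall k,\ \forall v \in V \setminus \{ s_{k}, t_{k} \} \\
& \sum_{e \in \delta^{+}(s_{k})} f_{k}(e) - \sum_{e \in \delta^{-}(s_{k})} f_{k}(e) = \lambda\, d_{k}
    && \forall k \\
& f_{k}(e) \ge 0, \quad \lambda \ge 0
\end{aligned}
```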
I briefly mention this in the video itself, but this project will play a small role in a collaborative project I am planning to do with fellow HACKER @Neale Estrella. So, I won't discuss much detail in this log.
Okay, so this project has very little to do with Linux. In fact, it has nothing to do with Linux, or Zynq! I’m actually planning to change the format of this “project” ( or perhaps the project’s format has already been changed, depending on when you are reading this log ). I regularly do smaller projects as a way to focus on learning something new. And, since I don’t want to end up adding a million different projects, this project page will be changed to include any sort of small embedded project! From this point forward, the individual projects under this project page will be added as separate logs!
On to what this log is really about…
An important component in virtually any computer system is Direct Memory Access ( DMA ), by which a peripheral can directly access main memory without depending on a host device. Clearly, a very useful and practically necessary component. For my applications using Xilinx FPGAs, external memory access with the Xilinx AXI DMA is the fastest ( and simplest ) method compared to the alternatives, for instance directly implementing a full AXI master interface ( excuse the redundancy )! So, I regularly use the DMA core so that my cores can access external memory. But, before the completion of this project, I would only use the AXI DMA to perform simple transfers, never relying on the more advanced, though more useful, Scatter-Gather ( SG ) features.
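For reference, a "simple transfer" with the standalone XAxiDma driver looks roughly like the sketch below. This is a minimal polling-mode example, not the code from this project: DMA_DEV_ID, the buffer, and the transfer length are placeholders, error handling is trimmed, and it assumes the core is configured in Direct Register ( i.e. non-SG ) mode.

```cpp
#include "xaxidma.h"      // Xilinx AXI DMA standalone driver
#include "xil_cache.h"    // cache flush/invalidate helpers
#include "xparameters.h"  // device IDs generated by the tools

// Placeholder device ID and buffer; adjust for the actual design.
#define DMA_DEV_ID  XPAR_AXIDMA_0_DEVICE_ID
#define N_SAMPLES   1024

static XAxiDma AxiDma;
static u32 TxBuffer[N_SAMPLES];

int dma_send_samples()
{
    // Look up the configuration and initialize the driver instance.
    XAxiDma_Config *Cfg = XAxiDma_LookupConfig(DMA_DEV_ID);
    if (Cfg == NULL || XAxiDma_CfgInitialize(&AxiDma, Cfg) != XST_SUCCESS)
        return XST_FAILURE;

    // Make sure the data is actually in DRAM, not just in the cache.
    Xil_DCacheFlushRange((UINTPTR)TxBuffer, sizeof(TxBuffer));

    // Kick off a single memory-to-stream ( MM2S ) transfer.
    int Status = XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR)TxBuffer,
                                        sizeof(TxBuffer),
                                        XAXIDMA_DMA_TO_DEVICE);
    if (Status != XST_SUCCESS)
        return Status;

    // Poll until the MM2S channel is idle again.
    while (XAxiDma_Busy(&AxiDma, XAXIDMA_DMA_TO_DEVICE)) { }

    return XST_SUCCESS;
}
```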
Not much theory on SG operations, really. I will talk a bit more about my experience with SG in the implementation section of this log.
As briefly mentioned, the Box-Muller algorithm is a method for generating normally distributed samples from uniformly distributed samples. There are two basic forms of the algorithm, the first of which relies on trigonometric operations, which are costly. The second form, referred to as the polar form, eliminates the trigonometric operations completely. I will only show the polar form here, since that is the form actually used.
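Since the polar form is short, here is a quick sketch of it in C++ ( plain floating point with the C library's rand() for clarity; the actual HLS core would presumably use fixed-point types and a hardware-friendly uniform source ):

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <utility>

// Polar (Marsaglia) form of Box-Muller: turns pairs of uniform samples on
// (-1, 1) into pairs of independent, normally distributed samples,
// without any trigonometric operations.
std::pair<double, double> box_muller_polar()
{
    double u, v, s;
    do {
        // Two uniform samples on (-1, 1).
        u = 2.0 * std::rand() / RAND_MAX - 1.0;
        v = 2.0 * std::rand() / RAND_MAX - 1.0;
        s = u * u + v * v;
    } while (s >= 1.0 || s == 0.0);   // reject points outside the unit circle

    // Common scale factor; needs only a log, a divide, and a square root.
    const double f = std::sqrt(-2.0 * std::log(s) / s);
    return { u * f, v * f };          // two independent N(0, 1) samples
}

int main()
{
    for (int i = 0; i < 5; ++i) {
        std::pair<double, double> n = box_muller_polar();
        std::printf("%f %f\n", n.first, n.second);
    }
    return 0;
}
```

The rejection loop throws away the roughly 21% of candidate points that fall outside the unit circle, but in exchange the transform itself needs only a logarithm, a division, and a square root instead of a sine / cosine pair.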
Pretty simple, right? With HLS, it’s almost trivial to implement! I will discuss a bit more about the implementation in the next section!
The Implementation: Generate a Tone and Make Some Noise!
All the project does is generate a single tone and add normally distributed noise to the signal. The volume of the tone and noise can be changed with the pushbuttons, and the tone and the signal can be toggled altogether with two slide switches. The target hardware includes the Digilent Nexys 4 DDR development board, which contains a Xilinx Artix-7 FPGA. Connected to the board is a Digilent Speaker Peripheral Module, which uses an amplifier that can be driven by PWM.
The general flow of the system is as follows. The host device, a MicroBlaze soft processor, first configures all the peripherals in the system. Through the DMA, a CORDIC core is then used to generate the samples of the sinusoid that represents the tone. Once the samples are acquired, several vectored SG write operations are configured to ensure the DMA continuously feeds the sinusoidal samples to the Out Channel.
So, my main challenge in this project can be boiled down to misunderstanding how the SG...
Please also check out the video on another, separate but related project! I finally got myself a microphone to connect to the Zybo's audio codec, and as an added bonus, started to use High Level Synthesis for the first time...