STRAY THOUGHTS ON EMBEDDED DEVELOPMENT

I’ve learned from the wonderful book An Embedded Software Primer by David Simon that there are – generally speaking – four classes of embedded system architectures:

1) Infinite loop with round-robin calls
2) Infinite loop with round-robin calls and interrupts
3) Function-queue scheduling
4) RTOS

When starting a project, attempt to identify which of the above best fits your requirements.


Embedded development is focused on speed. One optimization I found for the von Neumann architecture that decreases latency when doing signal processing is to store frequently used calibration values in RAM. This is not only faster than dereferencing pointers into FLASH memory, but it also keeps the bus clear for code retrieval.


An important interface between the electrical engineer and the embedded software developer is a table (updated regularly) containing all DAQ inputs into the uController. This table should have the following columns:
– uController pin #
– uController port and line (if available)
– type of pin (DI, DO, AI, AO, other)
– a verbose description

The description should contain the mapping of the measured/controlled parameter if the pin is AI/AO. For example, for a current-sensing AI pin:
[0 V; 3.3 V] -> [0; 4095 ADC counts] (12-bit) -> [-120 mA; 120 mA]

Also, speaking of DAQ, I find it very useful to have a header file that contains only the definitions of all the uController pins.
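Such a header might look like the sketch below. Pin numbers, ports, and signal names are made up for illustration; the point is that every assignment (and its mapping, for AI/AO pins) lives in one place:

```c
/* pins.h -- single home for every uController pin assignment.
   All numbers below are illustrative, not from a real schematic. */
#ifndef PINS_H
#define PINS_H

#define PIN_CURRENT_SENSE_AI   4   /* AI, port A line 4: [-120; 120 mA] sense */
#define PIN_PRESSURE_AI        5   /* AI, port A line 5: [0; 700 kPa] sensor  */
#define PIN_VALVE_CTRL_DO     17   /* DO, port B line 1: valve driver enable  */
#define PIN_STATUS_LED_DO     22   /* DO, port B line 6: heartbeat LED        */

#endif /* PINS_H */
```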


As embedded hardware is often very close to the physical world, I strongly recommend including measurement units as suffixes on all variables that hold measurable quantities (mA, A, kPa etc.).


For all calibration parameters keep a table in the design spec with the following columns:
– min and max values
– default value
– minimum increment value.

This is valuable during validation.


Here is a list of “tricky” issues I encountered while doing data acquisition development:

ADC ghosting
Many AI ports are multiplexed over the same ADC module, so when iterating over multiple inputs, previous readings introduce a bias into the current reading.

One way I dealt with this was to intersperse a reading of an AI pin tied to GND before every real AI reading.

ADC conversion current
When an analog signal is converted into a digital reading, a small (yet significant) amount of current is pulled into the uController pin. If the electrical design has any resistance in series with the analog input pin, the voltage read on the pin will be offset by the voltage drop caused by the conversion current.

This is more of an electrical design issue, but ensure that all analog inputs are properly buffered before reaching the uController.

Aliasing
There are a few ways to determine whether aliasing is occurring. One easy way is to disable and re-enable the ADC clock at random intervals (not multiples of the sampling period) and observe whether the ADC value changes. If it does, aliasing is most likely occurring. As far as I can tell, aliasing cannot be addressed in software; a low-pass filter needs to be added as close as possible to the uController's ADC input pin.

STRAY THOUGHTS ON APP DEVELOPMENT

I found the following in an e-mail I sent to a friend. It contains my raw thoughts on what to consider when developing an app. I brushed it up here and there and am posting it for my future reference.


Identify what the application needs to do (start with user requirements, derive functional specifications and maybe some test cases to gauge if you satisfy the user requirements).

Define major modules (i.e. database interface, other hardware the app touches, comm protocols, state machines used, error handling, data logging, whether you are using a model-view-controller pattern for the UI, etc.)

Define data model (i.e. what the tables in a relational database are, superkeys, XML trees, whatever you use to store data; also include some kind of versioning system for your data model… It’s going to come in handy later on.)

More recently this step has changed somewhat with the advance of NoSQL databases. It is still very important to take a first shot at defining a schema for your database.

Define “business logic” (start from the inputs and outputs of the system and work inwards; do you have any constraints on the data manipulation, and once those are defined, what constraints does the system impose on the users?)

Design UI (umm… I guess there’s way too much to talk about here)

Somewhere in between the UI and the data model there should be a well-defined IO protocol (i.e. this kind of data comes from the user, this comes from the database, and the user can only see this kind of data in this particular format)

Resource management (how fast do you want your app to respond to UI input? Maybe you need some threads to make it feel smooth – if so, try to define what must stay on the main thread and what can be spun off to secondary threads. If you are on an embedded architecture, start thinking about interrupt priorities, how big the stack can be, and whether you are running out of RAM or FLASH. Even on a desktop you probably don’t want to load a 1 GiB data set and play with it in RAM.)

Security/networking (is this accessible over the intertubes? Can the data set be compromised by some malefic spirit? More networking related – what happens if the internets are not available? Do you have graceful degradation of service, or does the app blow up in your face?)

Error processing and failure recovery. Where do you deal with errors? Do you have a singleton that you pump errors to, which then becomes the decision nexus for what happens next (I don’t like this, as it becomes a highly coupled element in the design – but sometimes it is needed)? Or do you deal with errors locally (this sometimes still requires a global Error object storing error flags that different modules of the app check)? Early on my UI didn’t have consistent error conventions (i.e. a well-defined error code structure) – and that made me feel kind of silly. Make sure you know which errors you detect and report without acting on, versus detect and try to correct. Are there any errors you can foresee (i.e. user input that’s contradictory)?

Scalability (aha… This bit me quite hard…) This is where you want to make sure you decouple modules and have well-defined interfaces. Keep your data models independent from your data-manipulation code. Use polymorphism to deal with different versions. Which modules are most likely to change? Throughout this process, remember that decoupling shifts complexity from local to global.

I’m kind of done… This is a raw brain dump, all typed on an iPhone, so pardon the spelling and coherency.

I would probably just come up with a design, subject it to an analysis focused on the areas above, implement some of it, and repeat until satisfied.

PHASE LOCK THEORY

Here I will present the notes I made while developing the phase lock algorithm used to detect trace amounts of DNA orbiting an electrical focal point.

It took me some time to wrap my mind intimately enough around the phase lock concept to be able to code it simply into an application. I am going to lay down my thoughts here for my future self to reference.

Phase locking allows me to home in on a signal from a noisy source by eliminating all components of the source except for a known frequency at a known phase. I know the frequency and phase because either I generate the signal or whoever generates it passes this information on to me. I found that the following mathematical presentation helped the most.

Let:

(1)   \begin{equation*} S_{enc} = A_{enc} \times sin(\omega_{enc} \times t + \theta_{enc}) \end{equation*}

be an Encoded Signal. S_{enc} is composed of the carrier wave and the Signal of Interest I want to transmit across the medium. In this case the amplitude of S_{enc} varies and is proportional to the Signal of Interest.
Let:

(2)   \begin{equation*} S_{bkgN} = \sum_{k=1}^n A_k \times sin(\omega_k \times t + \theta_k) \end{equation*}

be the Background Noise that accompanies the Encoded Signal. In this case the sum indicates that the profile of the Background Noise spans many frequencies at different amplitudes and phase offsets.

The signal that I receive and that I am going to apply the Phase Lock procedure on contains both the Encoded Signal and the Background Noise:

(3)   \begin{equation*} \begin{split} S_T &= S_{enc} + S_{bkgN}\\ & =A_{enc} \times sin(\omega_{enc} \times t + \theta_{enc}) + \\ &\phantom{=}\, +\sum_{k=1}^n A_k \times sin(\omega_k \times t + \theta_k) \end{split} \end{equation*}

Here is how I imagine the signal of interest, carrier wave and channel noise are combined to result in what gets received by the sensing module:

[figure: PhaseLockInputV0.2]

At this point I am going to delve into a bit of signal processing. My end goal is to extract the amplitude A_{enc} from the noisy S_T. For this I will use my knowledge that the Encoded Signal S_{enc} is broadcast at a frequency \omega_{enc} with a phase offset \theta_{enc}. The procedure is the following:

1) Multiply S_T by a reference signal S_{ref}. In this case S_{ref} has the same frequency and phase as S_{enc}.
2) Convolve the result with a constant function (this can also be interpreted as integrating the result).

To understand what the two steps above do I am going to present their effects both in time and frequency domain.

Remember: CONVOLUTION in time domain is MULTIPLICATION in frequency domain. And the other way around.

The outcome of the first step (the S_T \times S_{ref} in the time domain) in the frequency domain is to shift the frequency \omega_{enc} of my Encoded Signal to DC (0 Hz).

The second step (the convolution of the result of step 1 with a DC function in the time domain) results in all frequencies except DC being multiplied by 0 – practically leaving only the power component of the Encoded Signal.

If you manage to understand the sequence of steps above the math below will make much more sense.

Now that I have a good intuition on what I am planning on doing I’ll go through the math in time domain to make sure I get all the fine details right.

So first up is the multiplication – which is also known as “beating” S_T with a reference signal S_{ref}.

Let:

(4)   \begin{equation*} S_{ref}=sin(\omega_{ref} \times t + \theta_{ref}) \end{equation*}

Then:

(5)   \begin{equation*} \begin{split} S_T \times S_{ref} &= [A_{enc} \times sin(\omega_{enc} \times t + \theta_{enc})] \times sin(\omega_{ref} \times t + \theta_{ref}) +\\ &\phantom{=}\, +[\sum_{k=1}^n A_k \times sin(\omega_k \times t + \theta_k)] \times sin(\omega_{ref} \times t + \theta_{ref}) \end{split} \end{equation*}

Remembering from whatever is left of my trigonometry memories that:

(6)   \begin{equation*} sin(u) \times sin(v) = \frac{cos(u-v) - cos(u+v)}{2} \end{equation*}

The top term of (5) simplifies to:

(7)   \begin{equation*} \begin{split} \frac{A_{enc}}{2} \times [cos(\omega_{enc} \times t + \theta_{enc} - \omega_{ref} \times t - \theta_{ref}) - \\ - cos(\omega_{enc} \times t + \theta_{enc} + \omega_{ref} \times t + \theta_{ref})] \end{split} \end{equation*}

Now remember from the way I chose S_{ref} that \omega_{enc}=\omega_{ref} and \theta_{enc}=\theta_{ref}, so the first cos() becomes cos(0)=1, which is a DC (non-oscillating) term. The top term then ends up being:

(8)   \begin{equation*} \frac{A_{enc}}{2} \times [1 - cos(2 \times (\omega_{enc} \times t + \theta_{enc}))] \end{equation*}

The bottom term of (5) becomes:

(9)   \begin{equation*} \begin{split} \sum_{k=1}^n \frac{A_k}{2} \times [cos(\omega_k \times t + \theta_k - \omega_{ref} \times t - \theta_{ref}) - \\ - cos(\omega_k \times t + \theta_k + \omega_{ref} \times t + \theta_{ref})] \end{split} \end{equation*}

In this case however we can’t cancel out \omega_k \times t with \omega_{ref} \times t unless \omega_k is equal to \omega_{ref} and \theta_k = \theta_{ref}. This happens for Background Noise frequencies \omega_k that are “close” to the Encoded Signal \omega_{enc}=\omega_{ref} frequency and at about the same phase offset \theta_k = \theta_{enc} = \theta_{ref}.

Putting everything together equation (5) can then be simplified to:

(10)   \begin{equation*} \begin{split} S_T \times S_{ref} &= \frac{A_{enc}}{2} \times [1 - cos(2 \times (\omega_{enc} \times t + \theta_{enc}))] + \\ &\phantom{=}\, + \sum_{k=1}^n \frac{A_k}{2} \times [cos((\omega_k - \omega_{ref}) \times t + \theta_k - \theta_{ref}) - \\ &\phantom{=}\, - cos((\omega_k + \omega_{ref}) \times t + \theta_k + \theta_{ref})] \end{split} \end{equation*}

Upon a slightly more careful inspection I observe that the result of beating S_T with S_{ref} is a DC term and a series of oscillatory terms at different frequencies.

NOTE: Actually there are two DC terms. The second DC term is hidden in the cos[(\omega_k - \omega_{ref}) \times t + \theta_k - \theta_{ref}] term. As mentioned before, this happens when \omega_k is about the same frequency as \omega_{enc}=\omega_{ref} and \theta_k is equal to \theta_{enc} = \theta_{ref}.

So if I were to filter the result of S_T \times S_{ref} with a DC filter, I would remove all the oscillating terms. To filter down to DC, I convolve the product with a constant, which leaves me with:

(11)   \begin{equation*} [ S_T \times S_{ref} ] * 1 = \frac{A_{enc}}{2} + \frac{A_{k0}}{2} \end{equation*}

where the _{k0} term is used to indicate the frequency \omega_{k0} for which \omega_{k0} = \omega_{enc} and \theta_{k0} = \theta_{enc}. That is the component of the Background Noise that has a frequency and phase equal (or very close to) the Encoded Signal’s frequency and phase.

If we assume the Background Noise at the S_{ref} frequency and phase offset is small, then:

(12)   \begin{equation*} [S_T \times S_{ref} ] * 1 = \frac{A_{enc}}{2} \end{equation*}

Which means that:

(13)   \begin{equation*} A_{enc} = 2 \times ([S_T \times S_{ref}] * 1) \end{equation*}

which brings us to the goal of finding A_{enc}.

Thanks to the people at tex.stackexchange for teaching me how to use LaTeX properly.

The biggest thanks go to The Scientist and Engineer’s Guide to Digital Signal Processing by Steven W. Smith, Ph.D. for teaching me the introductory knowledge on convolution.