Porting Mozzi to ESP8266 - need help understanding timers
Posted: Mon Feb 26, 2018 5:07 am
Hello!
So, I'm in the middle of porting the Mozzi sound synthesis library (http://sensorium.github.io/Mozzi/) to the ESP8266. That's coming along nicely. I have all the basics implemented and working, but now it's time to worry about the fine points. I'm a newbie to the ESP8266, but not to microprocessors in general.
My present concern revolves around making sure that audio output keeps being pushed to the output pin continuously, even while the processing load is high. My current approach is to use "pulse density modulation" to convert the audio samples into (pseudo-)analog voltages and push them out to GPIO2 via Serial1.write(). This works remarkably well, and has the advantage that it needs no additional hardware (well, besides a current-limiting resistor and in-ear headphones) and only a single pin (I'm using it on an ESP-01!). However, it also means that I need to feed data into Serial1 at a relatively high rate: at least 4 (preferably 8) bytes per sample, at a sample rate of at least 16 kHz (preferably 40+ kHz). Serial1 can easily handle even the upper end of this (note that we do not need reliable data transmission; the point is just to approximate an analog signal), but, as far as I understand, it only has a 128-byte TX FIFO. Dividing that buffer by 8 bytes per sample means I can buffer at most 16 samples, so I will need to top up the buffer at least once per millisecond, and preferably much more often.
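To make the PDM idea concrete, here is a simplified sketch of the kind of conversion I mean (not the actual Mozzi code; the first-order sigma-delta and the scaling are just the simplest thing that works):

```cpp
// Turn one signed 16-bit audio sample into 8 PDM bytes (64 bits whose
// density of 1s tracks the sample value) and push them out via Serial1.
void writePDMSample(int16_t sample) {
  static int32_t acc = 0;               // sigma-delta accumulator, persists across samples
  for (uint8_t b = 0; b < 8; ++b) {     // 8 bytes = 64 PDM bits per audio sample
    uint8_t bits = 0;
    for (uint8_t i = 0; i < 8; ++i) {
      acc += sample;                    // integrate the error
      bits <<= 1;
      if (acc >= 0) {
        bits |= 1;                      // emit a 1 and subtract full scale...
        acc -= 32767;
      } else {
        acc += 32768;                   // ...or emit a 0 and add full scale
      }
    }
    Serial1.write(bits);                // lands in the 128-byte TX FIFO
  }
}
```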
(I am aware that other existing solutions use either an external DAC or PDM output via I2S; there the buffer is a bit larger, so the problem is less pronounced, but it is similar.)
So, to the question: processing power is not a major concern, but I need to make sure that my buffer-feeding code gets called with low latency. On another MCU, I'd simply set up a timer at the required rate and feed the output buffer from the timer ISR. That ISR automatically takes priority over any code running inside loop(), interrupting it whenever necessary.
On the ESP8266 that does not seem to be so easy, and I need some help understanding my options. First, can anybody confirm that the following understanding of os_timer and the Ticker class is correct (or correct me)? A sketch of what I mean follows the list.
- os_timer and the Ticker class operate in the main context, i.e. essentially inside loop().
- They rely on the code inside loop() to yield() often enough.
- They cannot run concurrently: while one of the timer functions is busy, another cannot fire.
- They have a minimum timeout of 1 ms, and going this low may even cause trouble with WiFi.
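For reference, this is roughly the shape of the Ticker variant I have in mind. writePDMSample() is from the sketch above, nextAudioSample() is just a placeholder stub for Mozzi's sample generation, and I am assuming Serial1.availableForWrite() reports the free space in the TX FIFO:

```cpp
#include <Ticker.h>

Ticker audioTicker;

int16_t nextAudioSample() { return 0; } // placeholder stub; really Mozzi's audio callback

// Top up the Serial1 TX FIFO with as many samples as currently fit.
void refillAudioBuffer() {
  while (Serial1.availableForWrite() >= 8) {   // room for one more sample's worth of PDM bytes
    writePDMSample(nextAudioSample());
  }
}

void setup() {
  Serial1.begin(2000000);                      // high baud rate; exact value depends on sample rate
  audioTicker.attach_ms(1, refillAudioBuffer); // 1 ms is the minimum period, as noted above
}

void loop() {
  // other work; must return / yield() often enough for the Ticker callback to fire
}
```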
Now, another option seems to be using timer0 or timer1, where, AFAIU, using timer0 means no WiFi, so essentially only timer1 is free to use. These timers have higher resolution. But do they, too, rely on yield()s inside loop(), or are these finally true interrupts that will - well - interrupt loop() as required?
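This is how I would expect the timer1 variant to look, assuming it really does fire as a hardware interrupt (which is exactly the part I'd like confirmed), reusing the same placeholder refillAudioBuffer() from the sketch above:

```cpp
// ISR must live in IRAM; keep it short if this is a true interrupt context.
void ICACHE_RAM_ATTR onTimerISR() {
  refillAudioBuffer();
}

void setup() {
  Serial1.begin(2000000);
  timer1_attachInterrupt(onTimerISR);
  timer1_enable(TIM_DIV16, TIM_EDGE, TIM_LOOP); // 80 MHz / 16 = 5 ticks per microsecond
  timer1_write(5000);                           // 5000 ticks = 1 ms between interrupts
}
```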
If frequent enough yield()s are an absolute necessity, can I just skip the timers entirely and attach a function that is always called from yield()? Note that I do not need any specific timeout period; I just need to feed the buffer at least n times per millisecond.
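In other words, the crude fallback would simply be (again with the placeholder refill function from above):

```cpp
void loop() {
  refillAudioBuffer();  // top up the Serial1 FIFO on every pass through loop()
  // ...do a small slice of other work, then return so this runs again soon...
}
```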
Alternatively: can I somehow attach an interrupt to a "Serial1 TX FIFO is about to run low" condition (or similar)? Can I increase the TX buffer size? Can I set up a DMA transfer to the Serial1 TX?
Thanks for your answers!