So simply modifying the buffer before sending it worked? Awesome!
It's really a PITA to get the timings right when the compiler does weird things to the code.
Lucky this works... Is it possible to get a __delay_cycles(n) macro like on AVR, or an interrupt-timer-based implementation that will still work even when the compiler decides to optimize differently than it does now?
flokli wrote:Is it possible to get a __delay_cycles(n) macro like on AVR, or an interrupt-timer-based implementation that will still work even when the compiler decides to optimize differently than it does now?
I think os_delay_us(n); works fine.
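If you want something closer to AVR's __delay_cycles, one option is to busy-wait on the Xtensa CCOUNT cycle counter instead of relying on instruction timing, so the compiler's code generation matters less. Just a sketch, not tested against this SDK, and the helper names (get_ccount, delay_cycles) are made up for illustration:

static inline uint32_t get_ccount(void)
{
    uint32_t ccount;
    /* Read the Xtensa CCOUNT special register (cycles since reset). */
    __asm__ __volatile__("rsr %0, ccount" : "=a"(ccount));
    return ccount;
}

static inline void delay_cycles(uint32_t cycles)
{
    uint32_t start = get_ccount();
    /* Unsigned subtraction handles CCOUNT wrap-around correctly. */
    while ((uint32_t)(get_ccount() - start) < cycles)
        ;
}

/* e.g. at the default 80 MHz, delay_cycles(80) is roughly 1 us. */

Note it still burns CPU and can be stretched by interrupts, so for strict bit-bang timing you'd still want interrupts disabled around the critical section.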