
CH32V203C8 LED Clock – Part 3

5 April 2025

Now to make the code display the time instead of just “8.8.8.8.” all the time.

6 April 2025

That was a little more difficult than I expected. There really wasn’t that much that was different from the previous, almost identical project, but I had made a few changes along the way, mostly in the naming of the various tables and maps. Dereferencing the segment map was particularly fun. But it works, as far as being a timepiece.

There are some drawbacks to this specific implementation, however. The display is quite dim, overall. I’m still using a 1/32 duty cycle, so it could be brought up a few percentage points in the brilliance category, but I’m not sure it’s worth it. There is also a visible flicker while the RTC interrupt is handled that stops the foreground task from updating the display properly. Maybe I could move that code to a timer interrupt handler instead.

I’m still going to add the time setting buttons and that code as well, as I think this clock is good enough to be used in a dimly lit room. These displays have a date code of “037W”, but I’m not sure which decade it refers to. I know I’ve had them for several years, and I suspect some of the newer LED technologies could be more efficient and much brighter in the same application.

I will also add the two little green colon LEDs for completeness. I might even add the temperature display I pondered earlier, but I’m not certain about that, yet. The only other desired feature that I had wanted to implement was the daylight saving option switch.

Again, favoring GPIOB for no particular reason, I have attached the hours and minutes setting push buttons to PB13 and PB14, respectively. Now to add the required code to configure them as inputs with pull-up resistors enabled, then set them up to trigger an external interrupt when pressed.
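
Here is a sketch of roughly what that amounts to, using the SDK’s STM32-style calls (not the verbatim project code; note that the AFIO peripheral clock has to be running before the EXTI line mapping does anything):

RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB | RCC_APB2Periph_AFIO, ENABLE);

GPIO_InitTypeDef buttons = {
    .GPIO_Pin  = GPIO_Pin_13 | GPIO_Pin_14,
    .GPIO_Mode = GPIO_Mode_IPU // input with internal pull-up
};
GPIO_Init(GPIOB, &buttons);

GPIO_EXTILineConfig(GPIO_PortSourceGPIOB, GPIO_PinSource13); // route PB13 to EXTI13
GPIO_EXTILineConfig(GPIO_PortSourceGPIOB, GPIO_PinSource14); // route PB14 to EXTI14

EXTI_InitTypeDef exti = {
    .EXTI_Line    = EXTI_Line13 | EXTI_Line14,
    .EXTI_Mode    = EXTI_Mode_Interrupt,
    .EXTI_Trigger = EXTI_Trigger_Falling, // a button press pulls the pin low
    .EXTI_LineCmd = ENABLE
};
EXTI_Init(&exti);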

That was just a bunch of copy/pasting. The only difference was that in this setting, PB13 and PB14 share a single EXTI interrupt, called EXTI15_10, which encompasses EXTI10 through EXTI15. So now I have only a single interrupt enabled and a single interrupt handler. This new handler must now differentiate which of the two, or possibly both, buttons have been pressed and set their corresponding flags for the foreground task to address.
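
A sketch of that single handler, with the flag names being placeholders of my own and the handler name as spelled in the SDK startup file:

volatile uint8_t hours_button_flag = 0;   // PB13
volatile uint8_t minutes_button_flag = 0; // PB14

void EXTI15_10_IRQHandler(void) __attribute__((interrupt("WCH-Interrupt-fast")));
void EXTI15_10_IRQHandler(void) {

    if(EXTI_GetITStatus(EXTI_Line13) != RESET) { // hours button
        hours_button_flag = 1;
        EXTI_ClearITPendingBit(EXTI_Line13);
    }

    if(EXTI_GetITStatus(EXTI_Line14) != RESET) { // minutes button
        minutes_button_flag = 1;
        EXTI_ClearITPendingBit(EXTI_Line14);
    }
}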

So the buttons are buttoning as I had hoped, but the logic from the last project does not work in this setting. While the previous project had an assistant to handle all the LED scanning and updating tasks (the TM1637 chip), these duties have now been in-sourced. I can’t just have the code stop and wait while the user holds down a button, as the display goes decidedly dark. Also, no detectable “setting” is happening, either. More pondering is indicated.

One way to do this is to implement a state machine. The initial state is when the button (either button) has yet to be pressed. We’ll start out in this state. When the button is pressed, it sets the flag in the interrupt handler, just as we’re doing now. When the foreground task detects that the button has been pressed, the time-of-button-press gets captured and the state advances to the button-is-pressed state. While in this state, the press-elapsed-time is calculated, and if it exceeds the threshold, in this case ~1/2 second, the unit (hour or minute) is incremented. If the unit overflows, it is reset. The time-of-button-press is then re-captured. When the button is released, the state cycles back to yet-to-be-pressed. At no point does the code loop, except for the singular outer loop. Not all that complicated, really.

State
------------------------------
button is not pressed
button has just been pressed
button continues to be pressed

This is only a little more specific than the previous boolean values of ‘pressed’ or ‘released’. We differentiate the press event, which triggers a particular action, from the continuously pressed state. Anyway, it works as expected.
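
In sketch form, the ‘set hours’ side might look something like this, reading the press event as the trigger for the first increment and the held state as the ~1/2 second auto-repeat; increment_hours() is a hypothetical helper and the flag comes from the EXTI handler above:

typedef enum {
    BUTTON_IDLE,         // button is not pressed
    BUTTON_JUST_PRESSED, // button has just been pressed
    BUTTON_HELD          // button continues to be pressed
} button_state_t;

#define HOLD_TICKS 72000000UL // ~1/2 second of STK ticks at HCLK = 144 MHz

static button_state_t hours_state = BUTTON_IDLE;
static uint64_t hours_pressed_at;

void hours_button_task(void) { // called once per pass through the single outer loop

    switch(hours_state) {

    case BUTTON_IDLE:
        if(hours_button_flag) {              // set in EXTI15_10_IRQHandler
            hours_pressed_at = SysTick->CNT; // capture time-of-button-press
            increment_hours();               // hypothetical: bump the hour, reset on overflow
            hours_state = BUTTON_JUST_PRESSED;
        }
        break;

    case BUTTON_JUST_PRESSED:
    case BUTTON_HELD:
        if(GPIO_ReadInputDataBit(GPIOB, GPIO_Pin_13)) { // pin is high again: released
            hours_button_flag = 0;
            hours_state = BUTTON_IDLE;
        } else if(SysTick->CNT - hours_pressed_at > HOLD_TICKS) {
            increment_hours();               // auto-repeat while held
            hours_pressed_at = SysTick->CNT; // re-capture for the next repeat
            hours_state = BUTTON_HELD;
        }
        break;
    }
}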


CH32V203C8 LED Clock – Part 2

4 April 2025

I got all of one more segment wired up, PB1 = segment b, when I discovered that PB2 is not wired to the connector pins of the WeAct Studio BluePill+ board. It is connected to an on-board blue LED via a 1.5KΩ resistor. I already knew about the LED, but assumed it was still brought out to the header pins. I assumed incorrectly.

This is not the show-stopper I thought it was last night. I just need to assign another pin to segment c duty and reconfigure the software. As long as I pick another pin on GPIOB, it will actually be pretty simple to accommodate.

It’s also worth noting that PB2 serves as the BOOT1 input, which controls how the chip boots up. In any case, it was not difficult to “adjust” the software to assign PB12 as the segment c driver pin. Since it also belongs to the same port as all the other pins currently being used, all I had to do was modify the bit mask used to set or clear the driver pins.

I had originally done the traditional read-modify-write cycle when controlling the GPIOB pins, wherein one reads the current state of the pins, clears the bits of interest, sets any required new values, then writes the result back to the output port. Then I remembered that the CH32V family of parts has the same bit-level control over the GPIO ports as the STM32, with which I have worked extensively in the past. You can set or clear individual bits without disturbing the other bits in the port by using the BSHR or BCR registers available on every GPIO port.
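
Using the relocated segment c pin as the example, the difference looks like this (SEG_C_MASK is just a name I am making up for the PB12 bit):

#define SEG_C_MASK GPIO_Pin_12 // single-bit mask for the segment c driver pin

void segment_c_example(void) {

    // the traditional read-modify-write cycle:
    uint16_t port = GPIO_ReadOutputData(GPIOB); // read the current output latch
    port &= ~SEG_C_MASK;                        // clear the bit of interest
    GPIO_Write(GPIOB, port);                    // write the whole port back

    // the same effect with the dedicated bit-level registers, no read required:
    GPIOB->BCR  = SEG_C_MASK; // drive PB12 low (segment on, since segments are active low)
    GPIOB->BSHR = SEG_C_MASK; // drive PB12 high (segment off)
}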

This worked well and had the additional benefit of allowing me to specify the segment bit masks as single ‘1’ bits, instead of ‘0’ bits. “Tomato, tomato,” I can hear you say. But just try coding some hexadecimal constants both ways and you tell me which one is both easier to write and later read again. I think you will agree with me, eventually.

So now I have a 1/32 duty cycle display with no visible flicker and a modest yet readable intensity. At this duty cycle, I can actually push triple the current through the LEDs, except that we’re nearing the limit (25 mA) of what the individual GPIO pins of the CH32V chip can deliver. And the whole point of this exercise was to see if we could get a reasonable display appearance with an absolute minimum of external components. At the moment, we’re using four (4) current-limiting resistors and no other driver components of any kind. I’m using through-hole axial 1/4W resistors in this prototype, as they are more breadboard friendly. The cost of the surface-mount versions of these components defies the imagination. They are almost completely free of cost, as far as the exchange of currency is required. The last time I bought them, they were less than one tenth of a cent per piece. Cheeep!

Right now the software is just going through a loop, illuminating each segment in turn. I need to add the actual time-keeping code to this project so that I will have a map of segments that can be turned on or off, as needed. That’s not going to be too difficult, I think, as I can just lift most of the code from the previous project and use it here, with minimal changes.

I’m also thinking I can up the apparent brightness of the display by not allocating a slot for all thirty two (32) possible segments. There’s no situation where this clock needs to display “8.8.8.8.” that I can envisage. What I ought to do, but can’t quite figure out a simple way to do it, is to map out all the possible segment patterns for the traditional 12 hour clock of my people and see what the maximum number of simultaneously lit LEDs actually is. Right now I only know it’s less than 32 and greater than zero, so we’ve got it properly bracketed for now. A more precise answer awaits. Anything that reduces the number of scan slots increases each segment’s duty cycle and, with it, the apparent brightness of the display.
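
One blunt way to get that precise answer is to brute-force it on the host: walk every time a 12 hour clock can show, count the lit segments using the 0-9 lookup table from the earlier TM1637 project, and keep the maximum. A quick sketch (the two colon LEDs are left out, as they are not among the 32 scanned segments):

#include <stdint.h>
#include <stdio.h>

static const uint8_t SEGMENTS[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

static int bits(uint8_t v) { // count the lit segments in one digit
    int n = 0;
    while(v) { n += v & 1; v >>= 1; }
    return n;
}

int main(void) {
    int max = 0;
    for(int h = 1; h <= 12; h++) {
        for(int m = 0; m < 60; m++) {
            int lit = 0;
            if(h >= 10) lit += bits(SEGMENTS[1]); // leading '1', otherwise blank
            lit += bits(SEGMENTS[h % 10]);
            lit += bits(SEGMENTS[m / 10]);
            lit += bits(SEGMENTS[m % 10]);
            if(lit > max) max = lit;
        }
    }
    printf("maximum simultaneously lit segments: %d\n", max);
    return 0;
}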

One option is to only scan through the display and light up the segments that need it, and not pause for the non-illuminated segments. The problem with this approach is that the more segments need displaying, the dimmer the overall effect. 10:08 is dimmer than 1:11, and that’s just not acceptable to me.

I also need to be thinking about just how wild I want to get with the individual colon LEDs that I have picked out for this prototype.


CH32V203C8 LED Clock – Part 1

3 April 2025

Time for another clock build. This time I will be using the CH32V203C8T6, as before, but will be driving the seven segment LED display directly, instead of using the TM1637 module.

For the LED display, I will be using a pair of ROHM LB-602MA2 dual digit seven segment displays with right-hand decimal points. The display color is green. These displays use a common anode configuration, although the individual segments are not tied together. This makes them more versatile, because you can wire up each segment separately or multiplex them, as you see fit. I will be multiplexing them.

I’m also going to try a “single segment” display algorithm to try to reduce component count. Only one of the thirty two (32) segments will be on at a given time. I hope that the resulting display is bright enough to be clearly seen. I can think of one good way to find out!

First, I look up the data sheet, which can be found at:

https://fscdn.rohm.com/en/products/databook/datasheet/opto/led_display/numeric/lb-602ak2-e.pdf

Sadly, this product is no longer recommended for new designs. On a happier note, I have a nice stash of them from my “collector” days. I’m really sort of surprised the data sheet was even still available. Good job, ROHM!

Now I have to assign some of the pins of the -203 chip to drive all these segments. The chip’s data sheet says there are 37 available GPIO pins. I’m using two for USART communication, PA9/TX & PA10/RX, at least through the debugging stage. They won’t really be necessary once everything is working properly. I will need to check the schematic of the WeActStudio BluePill+ board again to see what other pins have already been provisioned.

Port A

    User key:  0
    external flash (not connected):  4, 5, 6, 7
    MCO:  8
    USART1:  9, 10
    USB_DN, USB_DP:  11, 12
    SWDIO, SWCLK:  13, 14

Port B

    D1 (blue LED, active high), also BOOT1 (10 KΩ to ground):  2

Port C

    OSC32_IN, OSC32_OUT:  14, 15

Port D

    OSC_IN, OSC_OUT:  0, 1

So we can see that GPIO port A is already pretty busy. Ports C and D are 80% utilized with connections to quartz crystals, but they barely had any pins to pin with. That leaves us with GPIO Port B, which at the moment is only committed to LED duty. As I’m only going to be configuring these display driver pins as outputs, the existing LED connection should not create a conflict.

I propose a very simple mapping of segment and digit drivers:

PB0     segment a
PB1     segment b
PB2     segment c + blue LED
PB3     segment d
PB4     segment e
PB5     segment f
PB6     segment g
PB7     segment DP (right hand)

PB8     digit 1 (leftmost)
PB9     digit 2
PB10    digit 3
PB11    digit 4

Since the displays are packaged as dual digits, I can separate the two packages by just a smidge and leave room for a dedicated colon, made of two additional green 3mm LEDs.

All this circuit goodness is starting out on a solderless breadboard. If the proposed “single segment drive” methodology proves viable, I am thinking of committing this design to a printed circuit board (PCB). I’ve just ordered some non-differentiated, breadboard-compatible breakout boards for the LQFP48 package for more elaborate wiring experiments using this chip. They are taking about two weeks to be delivered these days.

While I’m developing the circuit and the firmware, the whole thing is being powered by the 3.3V DC power output from the WCH-LinkE programming adapter. That’s also how I’m programming the chip and connecting the USART to the serial console. The final version of the prototype will be powered via the USB-C connector on the BluePill+ board along with its on-board 3.3V regulator.

Partially wiring up the LED array for testing, I get absolutely nothing visible. I even break out my trusty multimeter and verify the signal levels going to the display. All looks correct, except that I am reading zero voltage (and therefore zero current) across the current-limiting resistor. I double check the wiring against the manufacturer’s data sheet and find everything is connected as it should be.

Then I realize that I’m thinking about this wrong. This has happened before and I predict that it will happen again. These LED displays are configured with a common anode, and I had assigned the signal levels as if they were the common-cathode configuration. The one thing we know about any diode, and let us not forget that LEDs are “light emitting diodes”, is that they conduct current in one direction and one direction only.

So I need to swap everything around in the software. The digit drivers will be active high and the segment drivers will be active low. This works. Now would be a good time to define some constants that represent the different level combinations that I will be using, at least for the initial development stage.
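
Something along these lines, with names invented on the spot:

// digit (common anode) drivers on PB8-PB11 are active high
#define DIGIT_ON     1
#define DIGIT_OFF    0

// segment (cathode) drivers on PB0-PB7 are active low
#define SEGMENT_ON   0
#define SEGMENT_OFF  1

// everything off, expressed as a GPIOB output value:
// PB0-PB7 high (segments off), PB8-PB11 low (digits off)
#define DISPLAY_BLANK 0x00FF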

Now when I say, “This works,” I mean that when I write a value of 0x01FE to GPIOB’s output register, the segment a1 (segment a, digit 1) lights up… but just barely. My exceedingly cautious self installed relatively high-value resistors (1KΩ) on each of the digit driver pins for the first part of the experiment. Yes, the digit driver pins, not the segment driver pins. A normal person would have attached a high(er) current driver to the digit commons and put the current-limiting resistors in-line with the segment drivers. Then you could light up all the segments of a single digit at once, momentarily, then select the next digit and enable its expected segment driver pins, rotating through the digits fast enough to give the illusion that they were all lit up simultaneously. It’s an optical illusion, but it’s a good one.

But I’m trying something different here. I’m only going to illuminate a single segment at a time and have a 1/32 duty cycle, instead of a 1/4 duty cycle. These ROHM displays are rated up to 60 mA as long as the pulse width is 1 millisecond long and the duty cycle is 1/5. I am assuming that shorter pulse widths and lower duty cycles are OK.

Now would be a good time to measure the actual current flowing through segment a1. It’s not much. I measure 10 mV (0.010 volts) across the 1KΩ resistor. Ohm’s Law tells us that the current is the voltage divided by the resistance. In this case, the voltage 0.01V divided by the resistance 1KΩ equals the current 10 μA (microamps). In case you were wondering, that’s not very much current at all. I’m actually surprised it was visible at all in normal room lighting.

Part of this smallness can be attributed to the fact that the -203 is operating at 3.3V and the typical forward voltage of each segment of this display is 2.1V, with the maximum being 2.8V. You have to exceed the diode’s forward voltage to get any current flowing at all, and then it’s an exponential relationship between the voltage and current after that.

If I could drive the LEDs with one of the -00X class chips or even the CH32X035, we could be running at 5V and deliver much more voltage to the LED. The main reason I’m not doing that at the moment is that none of those chips has a real-time clock (RTC) peripheral available, or a 32,768 Hz oscillator built in.

Now that does not rule out the possibility of a -00X based LED module driver that is controlled by a chip with RTC capability. But I already did that with the TM1637 module. And those chips by themselves are not especially expensive, and include all the power driving circuitry. So it doesn’t make a lot of sense to re-invent the wheel in this particular situation.

The maximum current rating of each segment at 100% duty cycle is 20 mA. The total power dissipation for the entire package (16 diodes) is 960 mW, or 60 mW per LED. So it’s time to crank up the power, but still in a responsible manner.

Now I measure 0.929 volts across a 100Ω current-limiting resistor. This represents a segment current of 9.29 mA. A younger me would have predicted a current limiting resistor of one tenth the previous value to produce a current ten times larger. I learned the truth about this when I designed my first LED array, which went on to be called variously the “IR Illuminator” or “IR Spotlight”, depending on where you bought one. It was originally an array of four parallel strings of nine infrared (IR) LEDs each. Later, I redesigned it as a 6×6 array for better power performance. My initial predictions of what size of current-limiting resistor were way off from the truth. A good way to learn about such things, I have found.

Now I think we can bump up the current even more, as long as we stay under 20 mA per segment. I measure 0.839 volts across a 75Ω resistor, giving a current of just over 11 mA. Not the doubling I had hoped for. We proceed:

Ohms    Voltage Current
        (volts) (mA)
----    ------- ------
1KΩ     0.010    0.01
100Ω    0.929    9.29
75Ω     0.839   11.00
47Ω     0.671   14.28
22Ω     0.412   18.73

And we have a winner! I think that over 90% of the rated current is plenty. No need to push it to ten tenths for this project. Also, it’s good to note that per the -203 data sheet, the maximum current into or out of any I/O pins is 25 mA, with a device total current of 150 mA. Since the plan is to illuminate only a single segment at a time, and that at only <20 mA, we should be good.

Now at first I found it odd that the voltage across the resistor was going down instead of up. More current means more voltage, yes? If this were a linear circuit, then yes. But as I mentioned before, the relationship between the forward voltage across a diode and the current is decidedly non-linear, and is, in fact, exponential in nature. So the forward voltage of the diode was also going up as the current rose. Tricksy little gizmos, these semiconductors!

Now comes a lot of wiring. It’s a good thing I like this part of the work. It’s something I have been doing for A Long Time Now and I’m starting to think that I’m getting pretty good at it.


CH32V203C8 & TM1637 LED Clock – Part 5

26 March 2025

What should have been a simple exercise in setting up a couple of push buttons turned into a deep dive (again). There was no problem actually reading the input pins and printing out messages like “Button 1 pressed” in the foreground task. But I really wanted it to be an asynchronous process, so I set up the EXTI external interrupt controller to generate an interrupt for each of the input pins. First I forgot to specify which GPIO port was associated with each pin. The CH32V family has the same architecture for EXTI support as the STM32, if you’re familiar with those. There are sixteen (sometimes more) possible inputs, but they can be on any available GPIO pin, so a little mapping is required. But that mapping is handled by the AFIO (alternate function IO) controller, and that interface has to have its peripheral clock enabled before it does anything. I finally figured it out, but it was terrifically late and I was more than a little frustrated at that point.

In other news, I thought of a simple way to make the colon flash, so I’m going to try to do that now. At first I thought I would have to go back to updating the LED module every second, but it turns out I only need to update that one digit (digit 2, which has its decimal point wired up as the two colon LEDs) every second. Then I can set the STK compare value to half a second in the future (i.e., 72,000,000 HCLK cycles from the current STK counter value), then have the STK interrupt on compare match and that interrupt handler can clear the colon bits – again, just a single digit update on the LED module.
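
As a sketch, assuming the CMP and SR register names from the -203 header, a SysTick_Handler entry in the startup file, and a hypothetical helper that rewrites only digit 2 (the SysTick interrupt also has to be enabled in the NVIC, not shown):

#define STK_STIE  (1 << 1) // compare-match interrupt enable bit in CTLR
#define STK_CNTIF (1 << 0) // compare-match flag in SR

#define HALF_SECOND 72000000UL // 0.5 s of HCLK cycles at 144 MHz

void colon_off_arm(void) { // called right after the RTC handler lights the colon
    SysTick->CMP = SysTick->CNT + HALF_SECOND; // match half a second from now
    SysTick->CTLR |= STK_STIE;                 // enable the compare interrupt
}

void SysTick_Handler(void) __attribute__((interrupt("WCH-Interrupt-fast")));
void SysTick_Handler(void) {
    SysTick->SR = 0;             // clear the compare-match flag
    SysTick->CTLR &= ~STK_STIE;  // one-shot: stay quiet until re-armed
    update_digit_2_colon_off();  // hypothetical: rewrite just digit 2, colon bit cleared
}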

Surprisingly, that worked the first time. It’s not as distracting as I thought it would be. It makes me wonder if an “on for one second and off for one second” colon would be less distracting or just weird. Then I wouldn’t need a separate timer interrupt to clear the colon, instead just using an additional prescaler variable in the RTC interrupt handler.

Back to the user interface. I have a separate interrupt handler for each of the buttons, but only because I arbitrarily chose PB3 and PB4 as the input pins. The first five GPIO pins on each GPIO port can be routed to the first five individual EXTI interrupt vectors, while EXTI5 through EXTI9 share a single vector, as do EXTI10 through EXTI15:

GPIO    EXTI        IRQn    Handler
----    ---------   ----    --------------------
0       EXTI0       6       EXTI0_IRQHandler
1       EXTI1       7       EXTI1_IRQHandler
2       EXTI2       8       EXTI2_IRQHandler
3       EXTI3       9       EXTI3_IRQHandler
4       EXTI4       10      EXTI4_IRQHandler
5       EXTI9_5     23      EXTI9_5_IRQHandler
6       EXTI9_5     23      EXTI9_5_IRQHandler
7       EXTI9_5     23      EXTI9_5_IRQHandler
8       EXTI9_5     23      EXTI9_5_IRQHandler
9       EXTI9_5     23      EXTI9_5_IRQHandler
10      EXTI15_10   40      EXTI15_10_IRQHandler
11      EXTI15_10   40      EXTI15_10_IRQHandler
12      EXTI15_10   40      EXTI15_10_IRQHandler
13      EXTI15_10   40      EXTI15_10_IRQHandler
14      EXTI15_10   40      EXTI15_10_IRQHandler
15      EXTI15_10   40      EXTI15_10_IRQHandler

The “Handler” names are the ones specified in the SDK-supplied startup file, startup_ch32v20x_D6.S. Write your own startup code and you can name them as you like.

The first thing I have to figure out is how to debounce these switch inputs. Like most mechanical push buttons, these little buttons I’m using in this prototype circuit have a certain amount of clickety-clack action when making contact. The internal contacts are literally bouncing several times before making firm and constant contact with each other. This shows up at the GPIO input pin as a series of transitions from high to low and back and forth several times before settling into a stable signal. I set up the EXTI trigger to look for falling edges, where the signal goes from a high level (no button pressed) to a low level (button pressed). It doesn’t measure how long it stays low or any other ‘quality’ measurement of the signal.

The simplest thing (almost always my favorite thing) is to just start measuring how long the button has been held down, and just ignore any suspiciously short “presses”. We also don’t want to “accidentally” start setting the clock by inadvertently brushing against the buttons. We again turn to the STK to help us time this event.

The challenge is that this is something that shouldn’t be “handled” within the interrupt handler. The best interrupt handler gets in, gets the job done, and gets out – fast. Waiting around for an external event to happen is not on the list of Things That Are Done.

So what we can do, instead, is to set a flag that can be checked in the foreground task, i.e., the endless while() loop within the main() function. Why bother, then, with all the interrupt stuff at all, you ask? That’s an excellent question. It’s because eventually (from a development standpoint) the system will be 99.999% in a low-power mode and not actually executing anything… until something interesting happens. Having a full-time, wait-and-see polling loop in the foreground doesn’t work when the CPU is shut down.

In the foreground task, I check to see if the ‘set hours’ button, the one connected to PB3, has been pressed, by checking the flag set in the interrupt handler. If it has, we can capture the system time from the STK timer, then wait while the button is pressed to see if half a second has elapsed. If it has, we increment the hours counter, which has now been taken out of the RTC interrupt handler and placed in the global scope, then update the display. Once this has happened, we reset the elapsed time counter and keep going as long as the button is pressed. Once the button has been released, we clear the button pressed flag. Repeat for the ‘minute set’ button.
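
In sketch form, with the display update as a hypothetical helper:

#define HOLD_TICKS 72000000UL // half a second of STK ticks at 144 MHz

extern volatile uint8_t hours_button_flag; // set in EXTI3_IRQHandler
extern uint8_t hours;                      // moved out of the RTC handler to global scope

void check_hours_button(void) {

    if(!hours_button_flag) return; // nothing pressed

    uint64_t pressed_at = SysTick->CNT; // capture the time of the press

    while(GPIO_ReadInputDataBit(GPIOB, GPIO_Pin_3) == 0) { // still held down
        if(SysTick->CNT - pressed_at >= HOLD_TICKS) {      // half a second elapsed?
            hours++;
            if(hours >= 12) hours = 0;  // roll over
            display_update();           // hypothetical: push the new digits to the TM1637
            pressed_at = SysTick->CNT;  // reset the elapsed time and keep going
        }
    }

    hours_button_flag = 0; // released: clear the flag for next time
}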

And for the purposes of this simple project, the UI is complete. That’s the third of three goals, so now is a good place to stop.


CH32V203C8 & TM1637 LED Clock – Part 4

25 March 2025

Running overnight, the clock module seems to be keeping time well. I managed to minimize the hum it generates by wiggling the USB connection of the WCH-LinkE programming adapter, which is where the module and my development board are getting their power.

Another thought I had to minimize the excessive noise is to display the time one segment at a time, completely obviating the function of the TM1637 chip. I think I will save that trick for when I am implementing a direct-drive LED interface using just the -203 chip.

I’m also considering whether or not to implement the flashing colon function at all. To have it flash on and off at 1 Hz, I would need to let the once-per-second interrupt handler update the time display with the colon on, then trigger a separate timer function to interrupt after a half a second has passed, then just re-write only the second digit, this time with the colon bit cleared. But in reality, the flashing colon might prove too distracting in actual operation.

Alternately, I could halve the RTC prescaler and get a 2 Hz interrupt, eliminating the need to involve a second timer in the process. So many options! And yet, maybe not even worth doing in any case.

But today I should really focus on designing and implementing the user interface for this little clock so that I can easily set the time, when needed. I would also like to implement a simpler way to toggle daylight saving time on and off, rather than force the user to go through the whole time-setting procedure twice a year.

OK, I’ve decided to leave the colon on all the time, at least for now. I also moved a little code around so that the LED module only gets updated once a minute, instead of every second.

Now, about that user interface… The simplest thing I can get away with on this project is two push buttons, one to set the hours and the other to set the minutes. Each press and release will increment the count by one, while holding the button down will cause it to count up at around 2 Hz, rolling over when it reaches its maximum count. A bonus feature will be to toggle the daylight savings time mode by pushing both buttons at once.

I connected two momentary contact push button switches to GPIO pins PB3 and PB4, mostly because those were the next pins in the completely arbitrary sequence with which I have been assigning GPIO pins. Now to alter the GPIO initialization code to set them up as inputs with pull-up resistors enabled.


CH32V203C8 & TM1637 LED Clock – Part 3

24 March 2025

Many hours later, I am still not seeing any LED segments that are compliant to my will. They just sit there, not illuminated, mocking me.

It did occur to me to fire up the old J4-led-key project, just to have a look at some working waveforms. They would not be exactly the same, as the TM1637 and TM1638 are slightly different, but similar enough to perhaps, just maybe, give me a clue as to what I am doing wrong.

However, in reviewing that older code, I see that I am currently sending all the data and commands as a single block, without intervening “start” signals. That could certainly be it. I will create a separate TM1637_start() function to drop the data line while the clock is inactive, which is how the TM1637 senses new commands incoming, whereas the TM1638 has a STB strobe/chip-select input pin to do that.

And that was it. It was the tiniest of little glitches on the waveform diagram in the data sheet, but they were labeled “start”, so that one was all mine.

The truth is that this implementation is still not correct. I can see on the oscilloscope where the -203 and the TM1637 are both trying to control the data line during the “ACK” acknowledge phase of the transfer. I don’t know what the long term effect of this will be to either chip. The correct thing to do would be to reconfigure the data line as an input for the duration of the ACK pulse, then set it back to being an output.
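
Here is roughly what the corrected transfer looks like, assuming hypothetical clk_high()/clk_low()/dio_high()/dio_low() pin wigglers, dio_input()/dio_output() to flip the data pin direction, and the delay() function from the previous part (the helper names are mine). Bits go out LSB first, the data line only changes while the clock is low, and the ninth clock is the ACK slot where the TM1637 gets to drive the line:

void TM1637_start(void) {
    clk_high();
    dio_high();
    delay(1 US);
    dio_low();      // DIO falls while CLK is high: "start"
    delay(1 US);
    clk_low();
}

void TM1637_write_byte(uint8_t value) {

    for(int i = 0; i < 8; i++) {   // LSB first
        clk_low();
        if(value & 1) dio_high(); else dio_low();
        delay(1 US);
        clk_high();                // data is sampled while the clock is high
        delay(1 US);
        value >>= 1;
    }

    clk_low();                     // ninth clock: the ACK phase
    dio_input();                   // let the TM1637 pull the data line low
    delay(1 US);
    clk_high();
    delay(1 US);
    // a low level here means the chip acknowledged; ignored in this sketch
    clk_low();
    dio_output();                  // take the data line back
}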

Now I have a row of very dimly lit segments glowing at me. It’s the top row of segments, by the way, what are usually referred to as the “A” segment (often as lower case, “a”) in the traditional seven-segment layout. I had sent a total of six (6) data bytes of 0x01 to be written to the display memory. This chip can actually address six digits, even though this module only has four connected.

Now that I can see the difference between “on” and “off”, I can now map out with some certainty the relationship between the bits I’m sending and the segments and digits that the chip is driving. All the schematic diagrams of the module that I have encountered in my searches show that the GRID1-GRID4 chip outputs are connected to digits 1-4 on the actual multiplexed LED assembly, with digit 1 being the leftmost.

Some code permutations later, I can confirm these mappings on this module. Additionally, the center colon is mapped to the decimal point of the second digit.

An important note is that whatever is written to the memory, stays in the memory, even if you overwrite other locations. For example, I omitted the command to write to the fourth digit, but instead of going dark, it remained illuminated with its previous value, 0x01. I think it best to completely update the entire display at once. This takes a total of ~175 us with my present code, giving me a maximum theoretical frame rate of 5714 Hz. That’s updating all six possible digits, and this module will only ever have four, so maybe we can push that framerate up a bit more? No?
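
The full update is then just three short command sequences back to back. A sketch, with TM1637_stop() being the mirror image of the start condition (DIO rising while the clock is high):

void TM1637_stop(void) {
    clk_low();
    dio_low();
    delay(1 US);
    clk_high();
    delay(1 US);
    dio_high();  // DIO rises while CLK is high: "stop"
}

uint8_t frame[6]; // segment patterns for GRID1-GRID6; only four digits are wired up

void TM1637_update(void) {

    TM1637_start();
    TM1637_write_byte(0x40);         // data command: write display memory, auto-increment address
    TM1637_stop();

    TM1637_start();
    TM1637_write_byte(0xC0);         // address command: start at the first digit
    for(int i = 0; i < 6; i++) {
        TM1637_write_byte(frame[i]); // all six possible digits, every time
    }
    TM1637_stop();

    TM1637_start();
    TM1637_write_byte(0x8F);         // display control: on, 14/16 brightness (see the table below)
    TM1637_stop();
}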

This little LED module will display any combinations of segments that you wish. You can even adjust the overall brightness, but that is for the entire display at once, not for individual segments. Here are the available brightness levels and the associated command to set each one:

Brightness
(duty cycle)    Command
------------    -------
[off]           0x80
1/16            0x88
2/16            0x89
4/16            0x8A
10/16           0x8B
11/16           0x8C
12/16           0x8D
13/16           0x8E
14/16           0x8F

Cranking it up to 14/16 duty cycle gives a very nice, bright display. It also produces a great deal of electromagnetic interference (EMI) in the audio range, so if you have any amplified speakers or other sensitive audio equipment operating in the area, you’re going to hear about it. There’s probably a way to properly shield this module to minimize this unwanted radiation.

So lighting segments and digits is all fun and good, but the module has no reasonable concept of numbers or letters built in to itself. We have to provide the correct combination of segments in the right places for the time (or whatever) to be displayed.

Seven segment displays, not just the LED variety, have been a favorite of mine since I was a much smaller and younger person. And every time I build a device that uses these little devices, I end up hand-coding the look-up table to convert from numbers (and some other symbols) to readable glyphs. You’d think I would just look up the most recent project and copy those codes over, but you’d be thinking wrongly. If I end up publishing this article, then I’ll have a reasonably accessible place to find it in the future, assuming I ever do this again.

So one more time, here are the digits 0-9 as I like to represent them using seven segment displays, assuming the following bit position-to-segment mapping:

Bit Segment
--- ------------
0   a
1   b
2   c
3   d
4   e
5   f
6   g
7   decimal point

Digit   Value
-----   -----
0       0x3F
1       0x06
2       0x5B
3       0x4F
4       0x66
5       0x6D
6       0x7D
7       0x07
8       0x7F
9       0x6F

To illuminate the center colon, add 0x80 to the value of digit 2.

There are some other symbols that are handy to have around when dealing with clocks. As there is no dedicated “AM” or “PM” indicator on this display, we might need to spell that out for the user. The letter “M” is, shall we say, challenging, but the “A” or “P” would be easy enough, and most likely legible. Actually, the first six letters of the Roman alphabet, “ABCDEF”, are plausible, as long as you really mean “AbCdEF”. These digits come in handy if you need to display hexadecimal values on a seven segment display. Stranger things have happened. Having both “F” and “C” available is nice if you also want to implement a temperature function, without having to pick a side. Then, of course, you’d need a degree symbol “°”, just to be clear. A hyphen or dash is sometimes useful, for example to indicate a negative temperature, and it’s just the “g” segment lit up all by itself. Even easier is the blank or space character, which, like the concept of zero, is just nothing, yet meaningful in context.

Here is the list of these other characters. They only take up one byte each, so it’s better to have them and not need them than to need them and not have them.

Glyph   Value
-----   -----
A       0x77
b       0x7C
C       0x39
d       0x5E
E       0x79
F       0x71
P       0x73
°       0x63
hyphen  0x40
space   0x00

I can collect all those magical, hand-crafted values into a table and then look them up as needed. But before I can do that, I need to decide how, exactly, I want this clock to keep track of time.
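
Collected into C, those tables come out to something like this (the names are mine):

static const uint8_t DIGIT_SEGMENTS[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, // 0 1 2 3 4
    0x6D, 0x7D, 0x07, 0x7F, 0x6F  // 5 6 7 8 9
};

#define GLYPH_A      0x77
#define GLYPH_b      0x7C
#define GLYPH_C      0x39
#define GLYPH_d      0x5E
#define GLYPH_E      0x79
#define GLYPH_F      0x71
#define GLYPH_P      0x73
#define GLYPH_DEGREE 0x63
#define GLYPH_HYPHEN 0x40
#define GLYPH_SPACE  0x00

#define COLON_BIT    0x80 // OR into digit 2's pattern to light the center colon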

The simplest possible clock that would be personally useful to me is a 12 hour clock displaying hours and minutes. Both the hours and minutes are composed of units and tens components. So the format of the display will be this:

Digit   Function
-----   ------------------------
1       Hours, tens
2       Hours, units, plus colon
3       Minutes, tens
4       Minutes, units

The simplest case is digit 4, the units component of the minute count. It will always be a digit between 0 and 9 and it will always be displayed. The next simplest case is the minute’s tens component. It will always be a digit between 0 and 5, and likewise will always be displayed.

Complications set in when we get to the hours counter. A traditional 12 hour format clock goes from 1 to 12, with 12 really meaning zero, at least to the 24 hour clock folk. Also, the hour’s tens digit will either be a 1 or not displayed. Fun stuff!

I previously discussed an approach wherein a periodic interrupt does the absolute minimum per iteration to update the representation of the time that will be displayed. This is my preferred approach in this situation and contrasts with simply keeping an abstract value representing the time, such as seconds past a certain point in time, then having to rebuild a “displayable” time from scratch every time it needs displaying.

I’ve already got the built-in real-time clock (RTC) peripheral of the -203 chip initialized and generating an interrupt every second. Right now it’s simply printing the count of seconds since startup on the serial console. I’ll need to add just a bit of code to make it do what I want. It will actually be semi-sorta interesting to look at, code-wise, once completed, but it’s very modular in nature and easily extendible. The “never nesters” out there are gonna Hate It.

While we’re not displaying seconds, we’re keeping track of them. Also, since the hours are a bit of a special case, we’ll keep track of them as a simple count. Within the body of the RTC interrupt handler, I declare a couple of static variables to hold these values:

static uint8_t seconds = 0; // not displayed, but we count them
static uint8_t hours = 0; // displayed after conversion

Then, deeper within the section of the interrupt handler that is specifically there to handle the once-per-second interrupt (there are two other ones in there, as well), I place the clock updating code:

seconds++; // increment seconds counter
if(seconds >= 60) { // seconds overflow
    seconds = 0; // reset seconds
    MINUTE_UNITS++; // increment minutes units
    if(MINUTE_UNITS > DIGIT_9) { // minute unit overflow
        MINUTE_UNITS = DIGIT_0; // reset minute units
        MINUTE_TENS++; // increment minutes tens
        if(MINUTE_TENS > DIGIT_5) { // minutes ten overflow
            MINUTE_TENS = DIGIT_0; // reset minute tens
            hours++; // increment hours
            if(hours >= 12) { // hour overflow
                hours = 0; // reset hours
            }
            if(hours == 0) { // special case for 12 hour clocks
                // spell it out
                HOUR_TENS = DIGIT_1;
                HOUR_UNITS = DIGIT_2;
            } else {
                HOUR_UNITS = hours % 10; // hours units
                HOUR_TENS = hours >= 10 ? DIGIT_1 : GLYPH_SPACE; // leading zero suppression
            }
        }
    }
}

TM1637_update(); // update the LED module

This handles the simple and most likely case, a new second that does not overflow the minutes counter. In the one-in-sixty chance that it does overflow, the units portion of the minute is incremented. When that overflows, the tens get updated. When that overflows, we start counting the hours. It was just easier to handle the hours as a single quantity, because of its two special cases.

I changed the seconds prescaler to 1 to watch the clock module count all the way around, which took 12 minutes. It was only slightly better than waiting the full 12 hours.

Now it’s running and I have a decision to make: Do I plunge into the “user interface” part of the project so that we can set the clock to the correct time (or continue to plug it in at midnight or noon, which works totally fine right now), or do I figure out how to make the colon flash?

Let me know your preference in the comments.


CH32V203C8 & TM1637 LED Clock – Part 2

23 March 2025

Keeping the WCH-supplied SDK RTC example program handy, I will add the necessary portions to my original test program. The first task, as always, is proper initialization.

Or is it? Here is the mystery of dealing with peripherals in the “backup power domain”. It might be ticking over just fine as the rest of the chip wakes up from its slumber. But how to tell?

It seems I need a better understanding of the backup domain in general. The specific chip I’m using today, the CH32V203C8T6, is referred to within the documentation as the “CH32V20x_D6”, which is its specific classification abbreviation. This is what I call the “small 203”. It has 64 KB of flash program memory and 20 KB of SRAM. The “big 203” is either the CH32V203RB (64 pin package) or the CH32V208, available in various packages. They have a nebulous amount of flash and SRAM. It’s quite hard to tell from the documents.

But our “little 203” has ten (10) 16-bit backup data registers that are in the backup power domain and should retain their contents as long as VBAT is maintained. The bigger parts have 42 such registers. These backup data registers can optionally be reset to zeros when a “tamper” event is detected. To the shredders! We’ve been breached! No perilous secrets being kept here, so I’m not going to arm the tamper detector… just yet.

What’s odd to me is that the RTC_Init() function in the RTC_Calendar example sets backup data register 1 to the specific value 0xA1A1, as if to say, “I was here”. Yet the software never subsequently checks this location.

I’m thinking that I might keep the derived calendar values, assuming I progress to that level, in these very backup data registers. But I’m getting ahead of myself. How to properly initialize the RTC, but only if it needs it?

Assuming that the RTC will need to be initialized at least once, there has to be code to do that, even if I can’t yet determine when, exactly, to do so. So I will write a straight-through process that proceeds as if it knows, truly, that the RTC must be set up from absolute zero. It will be like booting up the original IBM PC with DOS, which always thought it was Tuesday, 1 January 1980 upon waking.

The first thing the SDK-supplied example does in its initialization is to enable the PWR and BKP clocks on the APB1 bus:

RCC_APB1PeriphClockCmd(RCC_APB1Periph_PWR | RCC_APB1Periph_BKP, ENABLE);

I seem to recall some confusion over whether or not the PWR clock actually has to be enabled or not, but that may have been specific to the -003 chips, as documented by CNLohr’s ch32v003fun repository. A very simple test indicates that it is, indeed, reset to all zeros at boot. So let’s enable them now, using the above code snippet.

The RTC, being special, does not have a peripheral clock enable bit in any of the usual places. It is controlled by the RCC’s Backup Domain Control Register (RCC_BDCTLR).

The RTC works exactly as one would suppose, generating a periodic interrupt, if so configured. Right now, I’m just resetting the RTC counter to zero, then using it as a seconds counter, and printing out the current value every time the interrupt fires. Here’s the preliminary version of the rtc_init() function:

void rtc_init(void) { // initialize on-chip real-time clock

    RCC_APB1PeriphClockCmd(RCC_APB1Periph_PWR | RCC_APB1Periph_BKP, ENABLE);
    PWR_BackupAccessCmd(ENABLE);

    BKP_DeInit();
    RCC_LSEConfig(RCC_LSE_ON);
    while(RCC_GetFlagStatus(RCC_FLAG_LSERDY) == RESET); // add time out
    RCC_RTCCLKConfig(RCC_RTCCLKSource_LSE);
    RCC_RTCCLKCmd(ENABLE);
    RTC_WaitForLastTask();
    RTC_ITConfig(RTC_IT_SEC, ENABLE);
    RTC_SetPrescaler(32767);
    RTC_WaitForLastTask();
    RTC_SetCounter(0); // set to midnight
    RTC_WaitForLastTask();

    NVIC_InitTypeDef NVIC_InitStructure = {
        .NVIC_IRQChannel = RTC_IRQn,
        .NVIC_IRQChannelPreemptionPriority = 0,
        .NVIC_IRQChannelSubPriority = 0,
        .NVIC_IRQChannelCmd = ENABLE
    };
    NVIC_Init(&NVIC_InitStructure);
}

And here is the interrupt handler, very much as it was when I lifted it directly from the SDK example code:

void RTC_IRQHandler(void) __attribute__((interrupt("WCH-Interrupt-fast")));
void RTC_IRQHandler(void) {

    volatile uint32_t rtc; // seconds from RTC

    if (RTC_GetITStatus(RTC_IT_SEC) != RESET) {  /* Seconds interrupt */
        //USART1->DATAR = '!'; // *** debug ***
        rtc = RTC_GetCounter();
        printf("RTC = %i\r\n", rtc);
    }

    if(RTC_GetITStatus(RTC_IT_ALR)!= RESET) {    /* Alarm clock interrupt */
        RTC_ClearITPendingBit(RTC_IT_ALR);
        rtc = RTC_GetCounter();
    }

    RTC_ClearITPendingBit(RTC_IT_SEC|RTC_IT_OW);
    RTC_WaitForLastTask();
}

I’ve found that there are two ways to keep track of time on a microcontroller, assuming you have a reasonably accurate time base and a periodic interrupt. One is to simply increment a counter every timer tick, which in this case is every second, and then translate that scalar value into a collection of more useful units, such as hours, minutes and seconds when needed. The second way is to do the “translation” in an incremental manner, as each tick occurs, since the typical case is advancing the seconds count and nothing more. Then you check for overflow into the minutes unit, likewise for the hours, and so on. But usually there is only ever one thing that needs updating, and this executes quite quickly with the right code.
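
For contrast, a sketch of the first approach: one scalar counter, split into displayable units only when something actually needs displaying:

uint32_t uptime_seconds; // bumped by one in the periodic interrupt

void split_time(uint32_t t, uint8_t *h, uint8_t *m, uint8_t *s) {
    *s = t % 60;
    t /= 60;
    *m = t % 60;
    t /= 60;
    *h = t % 24; // or % 12 for a 12 hour display
}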

I’ve even taken it farther and broken down the unit seconds and ten seconds groups separately, saving the nuisance of converting a binary value to decimal over and over. The same would apply to the minutes, hours and however far you want to go with it.

But first, it’s now time to start lighting up some LED segments and pretending to tell time. Then we can join the two pieces together and more properly tell the time with this circuit.

I had previously worked on a project that used a similar chip, the TM1638, with the “LED&KEY” module that has eight (8) seven-segment LED displays with decimal points, eight (8) discrete red LEDs and eight (8) momentary contact push buttons. The microcontroller interface is similar, but includes a “STB” (strobe) input that is used as a chip select line. The TM1638 can drive up to ten (10) seven segment LED displays as well as scan an 8×3 array of push button switches. While reviewing the code, it looks like I started with handling the bit wiggling interface in software, and left a note to add support for the SPI peripheral. In retrospect, I don’t think that is possible. But what do I know? I’ve been surprised by SPI hardware in the recent past.

What the interface is not is I2C. Per the data sheet: “Note: The communication method is not equal to 12C bus protocol totally because there is no slave address.” Good to know.

I’ve already set up the two GPIO pins I will need to talk to the TM1637 chip as outputs. Since there are no push buttons connected to the clock display module (yet), I won’t be needing to read back any data from the chip, so the data line can stay an output.

I’ll need to make a small adjustment to the GPIO initialization code as the TM1637 data sheet indicates that the “idle” state of both lines is high. Right now they are both low and I have no idea what the poor little chip must think of me.

Here is the summary of what the “Program Flow Chart” describes for updating the display:

Send memory write command: 0x40
Set the initial address: 0xC0
Transfer multiple words continuously: <segment patterns>
Send display control command: 0x88 + brightness (0x00-0x07); 0x80 = display off
Send read key command (we're not doing this one)

Right now I don’t know which address corresponds to which digit on the display. I’m also not exactly sure which bit corresponds with which LED segment. But I aim to find out. Let’s start out by sending a single bit set to all six of the available addresses, 0xC0-0xC5.

It seems I have misunderstood the part about the maximum clock frequency we can use to talk to the chip. The data sheet specifies the “Maximum clock frequency” as 500 KHz, with a 50% duty cycle; not the 250 KHz figure I quoted yesterday.

As the data line is only supposed to change when the clock line is low during normal data transmission, I will try to center the transitions within the clock pulses. With a maximum clock frequency of 500 KHz, each clock transition is 1 us apart. So to aim for the middle of the low part of the clock signal, we should wait 500 ns after the clock line goes low to update the data line. The SDK-provided delay functions, Delay_Us() and Delay_Ms(), only provide microsecond or millisecond time spans. Right now I’m only using Delay_Ms() to time the blinking of the on-board LED. It’s time to deploy some higher-resolution delay functions.

Actually, all I need to do here is to start the STK system timer in free-running mode at the full system frequency of 144 MHz to get ~6.9444… ns resolution. Then I can just pass in the number of clock cycles I want to waste in the delay, add that number to the current STK counter value, then wait for the STK counter to exceed that number. Here’s the STK initialization code:

#define STK_STE (1 << 0) // STK enable bit in CTLR

void stk_init(void) { // initialize system timer

    SysTick->CNT = 0; // reset counter
    SysTick->CTLR = SysTick_CLKSource_HCLK | STK_STE; // enable STK with HCLK/1 input
}

I had to #define the counter enable bit for the CTLR because it’s not #define’d anywhere else. The SysTick_CLKSource_HCLK value happened to be available in the RCC header file.

And here’s the actual delay() function code:

#define NS /7 // STK tick factor for nanoseconds
#define US *144 // STK tick factor for microseconds
#define MS *144000 // STK tick factor for milliseconds

void delay(uint32_t delay_time) { // delay for 'delay_time' clock ticks

    if(delay_time == 0) return; // already late

    uint64_t end = delay_time + SysTick->CNT; // calculate time to end
    while(end > SysTick->CNT) {
        // just wait
    }
}

Using the #define’d units NS, US or MS for nanoseconds, microseconds and milliseconds, respectively, you can eloquently express your desired delay time:

delay(500 NS);
delay(250 MS);
et cetera

CH32V203C8 & TM1637 LED Clock – Part 1

22 March 2025

I found a little LED clock display module, driven by the Titan Micro Electronics TM1637 driver chip. I’d like to build a simple LED clock using this display module and a WCH CH32V203C8 RISC-V-based microcontroller.

I already had some of the “Blue Pill” development boards for the -203 chip from WeAct Studio. They are the same footprint as the STM32-based “Blue Pill” development boards, and fit nicely in a solderless breadboard. Another nice thing about these boards is that they already have a 32,768 Hz quartz crystal attached to PC14 and PC15, enabling the on-board real-time clock (RTC) of the -203. This one already had a WCH-LinkE programming cable built for it, a remnant of a previous project. This provides power, programming and serial communication lines from my laptop to the circuit. I added a purple wire for the NRST signal.

To wire up the TM1637 module to the prototype circuit, I will need another short cable. The LED module already has a four pin right-angle header soldered to it. The module needs +5V and ground, as well as digital clock and data lines. You know how I just can’t wait to build yet another custom cable for these projects. I’m getting pretty good at it, too.

I can’t really tell the pin numbering of the little LED module, but the individual signals are clearly marked on the PCB. Here is a description of the interface cable:

Pin Signal  Color   Description
--- ------  ------  ---------------
1   GND     black   ground
2   VCC     red     +5V
3   DIO     green   data in and out
4   CLK     yellow  clock

The little LED module is skittering about on the desk quite a bit. I might have to 3D print a little stand for it. I don’t have a mechanical drawing for this module, but as they are still being sold online, I should be able to find one.

Looking online for some more information about these little LED modules, I see that I have the “v1.0” revision of the board, with a “CATALEX” logo and the date “02/10/2014” on the back. The current crop of boards available online show a “V1.1” revision, as well as a square pad on the ground terminal, indicating pin 1. That was my guess, anyway. Sometimes I get lucky.

Note that this is the version of the LED module that has four complete seven segment displays and a center colon, but no decimal points.

The driver chip also supports scanning a small keyboard of up to sixteen (16) individual buttons, but does not support “n-key rollover”, so you can’t press more than one key at once. Well, you can, but the results are not guaranteed. To avail ourselves of this feature, I would have to tack on some wires directly to the chip on the back of the module. As the -203 has many as-yet unused pins that could be used for this function, we’ll keep that trick in our back pocket for now.

Having created a new MounRiver Studio 2 (MRS2) project for the software, named “C8-TM1637-clock”, of course, I can see that the USART serial lines are correctly connected and that the system is running at 96 MHz, which is the MRS2 default for these chips. I bumped that up to 144 MHz, because why not? The Blue Pill board already has an 8 MHz quartz crystal and 10 pF load capacitors (0402 packages – almost invisibly small) installed. Once all the “clockwork” of the clock is clocking clockfully, I can probably run the CPU from the internal RC oscillator, as the precision needed for keeping time will be the job of the 32,768 Hz crystal.

The Blue Pill board also has a blue LED mounted in active high configuration to pin PB2, via a 1.5KΩ resistor to limit the current. Let’s blink that LED, just to make sure we can.

First, I create a new function called gpio_init() to set up everything. There, we enable the peripheral clock for GPIOB with this SDK call:

RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB, ENABLE); // enable GPIOB peripheral clock

Next, the first three pins of GPIOB are configured as push-pull outputs. I have arbitrarily decided to use PB0 as the clock line and PB1 as the data line for the TM1637 module. The code looks like this:

GPIO_InitTypeDef gpio_init_structure = { 
    .GPIO_Mode = GPIO_Mode_Out_PP,
    .GPIO_Pin = GPIO_Pin_0 | GPIO_Pin_1 | GPIO_Pin_2,
    .GPIO_Speed = GPIO_Speed_2MHz
 };

GPIO_Init(GPIOB, &gpio_init_structure);

No high-speed shenanigans are required, so I specified the lowest frequency, 2 MHz. The maximum clock speed for the TM1637 is 250 KHz. 2 MHz is overkill, but it’s the lowest setting available.

Within the main() function’s infinite loop, I put this code to blink the LED:

GPIO_WriteBit(GPIOB, GPIO_Pin_2, Bit_SET); // LED on
Delay_Ms(250); // short delay
GPIO_WriteBit(GPIOB, GPIO_Pin_2, Bit_RESET); // LED off
Delay_Ms(250); // short delay

And sure enough, there’s that blinking LED we all love to see early on in any embedded project. All is well with the world.

Now to set up the real-time clock (RTC) peripheral on this chip. It gets a whole chapter in the Reference Manual, Chapter 6. The manufacturer also supplies a code example called “RTC_Calendar” that demonstrates the RTC being set up and printing the time and date to the serial console, using an interrupt. We can peek at this code to get an idea of what is involved to get it clocking ourselves.

The RTC circuit on this chip is simple in its execution. It’s a 32 bit counter that has a programmable clock prescaler and choice of clock inputs. For time-of-day applications, it’s almost always going to be driven by a dedicated 32,768 Hz quartz crystal attached as the LSE (low speed, external) oscillator, divided down into one second pulses. Oddly, all the access and manipulation registers are only 16 bits wide. With 2^32 seconds of run time before it overflows, which is over 136 years, you’d think we’d be safe. But this is exactly the predicament we find ourselves in with the Unix Epoch, wherein the ancestors started counting seconds on 1 January 1970, thinking that the year 2106 would never come. Well, it will, and it’s only 81 years from the date of this writing. Your Humble Narrator fully intends to be complaining about things such as this well past this milestone in our future.

Since the madness of daylight saving time has yet to be expunged from our civilization, we also have to deal with that nonsense, if we’re going to have a clock that sorta-kinda reflects the societally-agreed-upon time. Leap years, on the other hand, are a completely natural and reasonable thing to handle, as the orbital velocity and rotational velocity of this planet are not (yet) tidally locked. One day, we can hope, it will be. Then peace will guide the planets and love, love will steer the stars. Until then, there’s a surprisingly elegant mathematical solution that should keep us pretty close for many centuries.

Another interesting thing about the RTC is that it is within the “backup power domain” of this chip. There is a separate power input pin on this family of chips for battery power, so that things such as the real-time clock and other critical functions can be preserved even when the flash & bang parts of the chip are powered up and down. There is a recognized division within the chip as far as access from one domain to another goes, in that there is a specific sequence of steps to be taken to reset and configure the RTC, even if the rest of the chip has been power cycled.

The Blue Pill board does not provide a separate battery connection, but instead routes the regulated 3.3V supply, via a Schottky diode, to the VBAT pin. I’m not too awfully worried about it at the moment. The next stage of this project, should it ever transpire, would be to create a bespoke PCB for the components and implement a direct LED drive circuit, obviating the need for the TM1637 circuit entirely. Alternately, a dedicated RTC chip with its own backup battery could be added to even the humblest -003 variant as another approach, using abundantly available modules.

There are three key ingredients to any successful timepiece:

1.  Keeping time
2.  Telling time
3.  Setting the time

Now you can also get fancy and add other functions, such as calendars, alarms, timers, and other really nice features. But the basic requirements of a useful clock must be addressed first. I’ve hinted at my solutions for the first two requirements (1: on-chip RTC, 2: TM1637 LED module) but haven’t talked about how we are going to set the time. The SDK example cheats, and just insists that “the initial time is 13:58:55 on October 8, 2019”, which was not when the software was published, so I assume it has some other significance.

Next steps will be to transfer over the appropriate code to set up the RTC, probably using a fake time as well, then start to get some of the LED segments glowing. The time setting user interface I will leave for last.

Posted on Leave a comment

CH32V00x – More Thoughts and More PCBs

13 March 2025

I’ve been thinking about some of my other assumptions with regard to these little chips and what it takes to program them. One thing is that I had been laboring under the false assumption that the interrupt vector table, should one decide to use one, must reside at location zero in the program memory. In truth, it can be located on any 1 KB boundary, and you can (must) tell it where by writing to the mtvec CSR. Per the QingKe V2 Processor Manual, Section 2.2 Exception, p. 4:

"It should be noted that the vector table base address needs to be 1KB aligned in the QingKe V2 microprocessor."

This limitation is not present on the other QingKe processor families.

So for a chip with a mighty expanse of 16 KB, you actually have 16 different choices available to you. One of the good reasons to stick the vector table at address zero is that it avoids any gaps in the program memory when you’re writing really small applications, as I tend to do with these chips. It totally doesn’t matter if you spread your ones and zeros across the entire continuum of available sites or cram them all at one end or the other.

Additionally, you could even point the mtvec CSR at one of two SRAM addresses, 0x20000000 or 0x20000400, and have an instantly reconfigurable vector table in volatile memory. I have no idea if this would actually work or not. You could even use the VTF mechanism to cover any interrupt requirements in the device set-up stage, as long as you only need two of them. The bigger chips offer four VTF slots.
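For the record, relocating the table is only a couple of instructions. A sketch, assuming the low two bits of mtvec are the 0x3 mode flags (vectored mode, absolute-address entries) that the stock WCH startup code writes:

    #include <stdint.h>

    /* hypothetical helper: point mtvec at any 1 KB-aligned table, flash or SRAM */
    extern const uint32_t vector_table[] __attribute__((aligned(1024)));

    static inline void relocate_vector_table(const void *base)
    {
        uintptr_t mtvec_value = (uintptr_t)base | 0x3;   /* low bits = mode flags */
        __asm__ volatile ("csrw mtvec, %0" :: "r"(mtvec_value));
    }

    /* e.g. relocate_vector_table((const void *)0x20000000); for the SRAM experiment */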

I really need to design some new PCBs for all the incoming chips. One thing I noticed about the new CH32V002 parts was what looks like a ground pad on the bottom of the SOP8 package, the J4M6 variant. This appears to be an anomaly as the data sheet makes no mention of it. Additionally, the “photo” of the TSSOP20 package just has the identifier “813524E47” on it, and no part number or WCH logo, so these may just be placeholder photos until they can book some studio time for a proper photo-shoot.

I also see that there is supposed to be a QFN12 package with 11 available IO lines, the -D4U6 variant. What sorcery is this? It’s 2 mm square. So tiny! I can’t wait to make some eensy-weensy doo-dads with these little chips.

Other differences of note when compared to the original CH32V003:

12-bit ADC, 3 MS/sec sampling rate
8-channel Touch-Key detection
RV32EmC - hardware multiplication
4 KB SRAM
2.0-5.5 VDC system power supply
2 ms power-on reset

Still no SPI on the SOP8 or the QFN12 packages. It’s not like I’m invested in understanding the SPI peripheral on this chip or anything…

As the SOP8 package still has both dedicated VSS and VDD pins, I can design a PCB that omits the solder pad, if it even really has one. I don’t expect to see the chips in person for at least another week.

The CH32V006 also has some upgrades when compared to the CH32V003 or -002:

62 KB flash
8 KB SRAM
2 USARTs
31 GPIO lines
    GPIOA PA0-PA7
    GPIOB PB0-PB6
    GPIOC PC0-PC7
    GPIOD PD0-PD7
Operational amplifier
3 timers, 2 watchdog timers, STK timer

So I will need a little prototyping board for each of the incoming chips:

CH32V002J4P6 - SOP8
CH32V002F4P6 - TSSOP20
CH32V006F8P6 - TSSOP20
CH32V006K8U6 - QFN32

I’d also like to design a DIP8 adapter for the SOP8 package that would let me use these chips as a drop-in replacement for the Atmel AVR ATtiny13 that is in absolutely everything I sell. I have a bunch of 1:1 pin-mapping DIP8 adapters for the SOP8 packages. They’re handy for breadboarding.

Posted on Leave a comment

CH32V003 driving WS2812B LEDs with SPI – Part 14

13 March 2025

Continuing my investigation into why the CH32V003 SPI port sometimes just locks up, I have looked at the source code for the two functions that are involved: the SPI_I2S_GetFlagStatus() function and the SPI_I2S_SendData() function. They both do exactly what you would hope they would do.
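Stripped of its timeout handling, the transmit pattern in question is roughly this (header name assumed from the EVT package):

    #include <stddef.h>
    #include <stdint.h>
    #include "ch32v00x.h"

    /* wait for the transmit buffer to empty, then hand over the next byte */
    void spi_send_buffer(const uint8_t *data, size_t length)
    {
        for (size_t i = 0; i < length; i++) {
            while (SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE) == RESET) { }
            SPI_I2S_SendData(SPI1, data[i]);
        }
    }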

What comes up as suspicious is my initialization of the port. Here are the values of the two control registers as well as the status register immediately after being initialized:

SPI_CTLR1 = 0xC154
SPI_CTLR2 = 0x0000
SPI_STATR = 0x0002

This varies from the final value of SPI_CTLR1 that I used in my assembly language version of the diagnostic: 0xC354.

So what’s the exact difference here? Using the SDK function to initialize the SPI port with what was just my best guess at what would be correct, we get the following bits set in CTLR1, versus what I told it to do:

Bit  Field     SDK  mine  Description
---  --------  ---  ----  -----------
0    CPHA      0    0     clock phase (don’t care)
1    CPOL      0    0     clock polarity (don’t care)
2    MSTR      1    1     coordinator mode
5-3  BR        2    2     bit rate FCLK/8
6    SPE       1    1     SPI enable
7    LSBFIRST  0    0     not set = MSB first
8    SSI       1    1     select pin level
9    SSM       0    1     select management: 0=hardware, 1=software
10   RXONLY    0    0     receive only mode (not used)
11   DFF       0    0     0 = 8 bit data
12   CRCNEXT   0    0     send CRC (not used)
13   CRCEN     0    0     enable hardware CRC (not used)
14   BIDIOE    1    1     enable output, transmit only
15   BIDIMODE  1    1     one line bidirectional mode

The only difference I see is that the SSM bit is cleared in the SDK initialization and set in mine. Since we’re not using the select line to select anything, it shouldn’t matter. It does matter that the NSS line is already set high before enabling the peripheral in coordinator mode. Per the RM, Section 14.2.2. Master Mode, p. 162:

"Configure the NSS pin, for example by setting the SSOE bit and letting the hardware set the NSS.  [I]t is also possible to set the SSM bit and set the SSI bit high.  To set the MSTR bit and the SPE bit, you need to make sure that the NSS is already high at this time."

And I know that it indeed does not work if the NSS is not set high before enabling the peripheral. The peripheral simply locks up with a “mode fault” error.

I added some code to print out the status register when a timeout occurs. I immediately see that it is always 0x0020, which means a “mode fault” has occurred. Here’s a list of things that can cause a mode fault on this peripheral:

When operating in NSS pin hardware management mode, the NSS pin is pulled low externally
When operating in NSS pin software management mode, the SSI bit is cleared
The SPE bit is cleared, causing the SPI to be shut down
The MSTR bit is cleared and the SPI enters slave mode

Perhaps noise on the otherwise un-initialized NSS line is triggering an intermittent mode fault? Looking back, I see I have, in my ignorance, not specified which NSS handling strategy (hardware vs software) to use when configuring the peripheral.

Setting the SPI_NSS field to ‘SPI_NSS_Soft’ (1) when performing the SPI initialization, we get the following setup profile when the application starts:

SPI_CTLR1 = 0xC354
SPI_CTLR2 = 0x0000
SPI_STATR = 0x0002
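For completeness, the initialization call that should produce those values looks something like the sketch below, using the SPL-style structure fields; the only functional change from my first attempt is the SPI_NSS line.

    /* sketch of the corrected init; SPI_NSS_Soft sets SSM=1, which together with
       the other fields should land CTLR1 at 0xC354 once SPE is set by SPI_Cmd() */
    SPI_InitTypeDef spi = {0};

    spi.SPI_Direction         = SPI_Direction_1Line_Tx;   /* BIDIMODE + BIDIOE */
    spi.SPI_Mode              = SPI_Mode_Master;           /* MSTR + SSI */
    spi.SPI_DataSize          = SPI_DataSize_8b;
    spi.SPI_CPOL              = SPI_CPOL_Low;
    spi.SPI_CPHA              = SPI_CPHA_1Edge;
    spi.SPI_NSS               = SPI_NSS_Soft;              /* SSM=1: the missing piece */
    spi.SPI_BaudRatePrescaler = SPI_BaudRatePrescaler_8;   /* BR = FCLK/8 */
    spi.SPI_FirstBit          = SPI_FirstBit_MSB;
    spi.SPI_CRCPolynomial     = 7;                         /* unused, reset default */

    SPI_Init(SPI1, &spi);
    SPI_Cmd(SPI1, ENABLE);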

So now it matches my bit-wise initialization of the control register. Now it’s time to let it run overnight on ‘The Gauntlet’, as I have named my alternate test setup, and see what we shall see.