I'm sure you wouldn't want to use a keyboard that was inaccurate. Unfortunately (as you might have seen me moaning in the past), the keyboard generates the clock signal. This basically means you need to be constantly checking the clock line to see if it goes low - that, or use a hardware interrupt that jumps in and receives the scancode packet whenever the keyboard decides it wants to send something.
Well, I don't have the luxury of an extra keyboard chip or a hardware interrupt, so I needed to don my thinking cap. I had thought that one possible way around this would be to send a "disable" command to the keyboard after receiving bytes, run my handler code, then send an "enable" command - unfortunately, these commands clear the keyboard's buffer, which is no good.
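For reference, the commands in question are $F5 ("disable") and $F4 ("enable"), so the scheme I was considering would have looked something like this (a sketch, using my at_send_byte routine from earlier):

ld a,$F5 ; "Disable" - the keyboard stops scanning.
call at_send_byte
; ...receive and handle the buffered scancodes here...
ld a,$F4 ; "Enable" - scanning resumes, but this also
call at_send_byte ; clears the keyboard's buffer. Not good!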
I downloaded another document on the AT protocol to see if they mentioned anything useful, and lo and behold:
The host may inhibit communication at any time by pulling the Clock line low for at least 100 microseconds.
I think I can spare 100µs - I added the line ld a,1 \ out (bport),a to the end of my buffer-filling code from earlier (it fills a scancode key buffer), and now the code doesn't drop a single scancode. Result!
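In other words, the code now holds the clock low while the handlers run, then releases it afterwards - something along these lines (a sketch, assuming as in my earlier code that setting bit 0 of bport holds the clock line low and clearing it releases the line):

ld a,1
out (bport),a ; Hold the clock low: the keyboard waits patiently.
; ...run through the buffered scancodes at our leisure...
xor a
out (bport),a ; Release the clock: the keyboard may send again.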
Providing useful functionality
All these keyboard routines do at the moment is display the incoming data on the screen - not exactly a great use of them. What is really needed is a simple two-way handler: one that calls one of two user-defined routines based on what a key is doing - whether it is being pushed down or released.
For this, I should translate the scancodes into a new format - there are fewer than 256 keys, so there's no reason why I can't fit every single key into a byte.
As well as the user-defined keyboard events, I'll have to add my own to handle toggling the status of the keyboard - the num/caps/scroll lock states as well as shift/alt/ctrl.
It's really quite simple to do.
; Now we need to run through all the keyboard events!
; (A reconstruction: the LUT labels are illustrative; translated codes are assumed to follow each LUT in a parallel table, and the buffer is assumed zero-terminated.)
ld ix,_buffer ; Start at the beginning of the buffer.
_handle_next_scancode:
ld a,(ix+0) ; Grab the next byte.
or a ; A zero marks the end of the buffer.
ret z
; Let's assume it's a normal key:
ld hl,_key_lut ; HL -> translation LUT for normal keys,
ld bc,_key_lut_len ; BC = its length.
cp at_scs_enhance ; Is it an enhanced key?
jr nz,_not_enhanced
inc ix ; We know it's enhanced, so move along.
ld a,(ix+0)
ld hl,_key_ext_lut ; Use the enhanced LUT instead,
ld bc,_key_ext_lut_len ; with its own length.
_not_enhanced:
; Now HL points to our LUT, BC is the correct length
; and IX points to the next byte.
ld de,_key_down ; Assume the key is going down...
cp at_scs_keyup ; Is it a key UP event?
jr nz,_not_key_up
inc ix ; Move to next chunk before we do anything.
ld a,(ix+0)
ld de,_key_up ; ...no, it's being released.
_not_key_up:
inc ix
; At this point, A=scancode, HL->translation table, DE->handler, BC=table size, IX->next scancode.
; We now need to run the translation.
push bc ; CPIR destroys BC, so remember the table length.
cpir ; Simple as that!
pop bc
jr nz,_handle_next_scancode ; Not found
; So HL->scancode+1
; Here is where the magic happens:
dec bc
add hl,bc ; HL -> the matching entry in the parallel table.
ld a,(hl) ; Now, A = 'real' scancode.
; We need to 'call' DE. We can spoof this easily:
ld hl,_handle_next_scancode
push hl ; _h_n_s is on TOP of the stack.
push de ; our event handler is on top of the stack;
ret ; POP back off stack and jump to it.
; When the handler RETs it'll pop off _h_n_s and carry on scanning!
I created a couple of very basic event handlers - the key down event displays ↓ followed by the adjusted key code, and the key up event displays ↑ followed by the adjusted key code.
What would be ideal would be to provide internal event handlers, called on keyup/keydown, which would then jump over to the user's custom handler. These internal handlers could look for special keys, adjusting the keyboard LEDs and setting internal flags that could be used to detect the status of certain keys. I'd need:
- Num Lock
- Caps Lock
- Scroll Lock
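Each of these needs an internal flag that gets toggled when the key goes down, followed by a "set LEDs" command ($ED, then a byte with bit 0 = Scroll Lock, bit 1 = Num Lock, bit 2 = Caps Lock). A sketch for Caps Lock (the _kb_flags variable and handler name are my own inventions, and the flags are assumed to use the same bit order as the LED command):

_toggle_caps_lock:
ld a,(_kb_flags)
xor %00000100 ; Flip the Caps Lock bit.
ld (_kb_flags),a
ld a,$ED ; "Set/Reset LEDs" command...
call at_send_byte
ld a,(_kb_flags)
and %00000111 ; ...followed by the three LED state bits.
jp at_send_byte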
Unfortunately, I think that there is a problem with my byte-sending code - setting the keyboard LEDs starts to do strange things. I now have two options:
- After switching status, ignore the LEDs. The status flag is set correctly internally, but you can't see it on the keyboard (which is a bit pants).
- Update the status flags every single time we run through the loop to check for any new bytes (which makes the keyboard lag like crazy - there's up to half a second of buffering going on!)
Mixing and matching the two - updating the LEDs just before we bring the clock high again (for maximum speed) - confuses the keyboard: the status LEDs never change, and it disables itself in a strop until I send it $FF (the reset command). I think it's time to revisit the at_send_byte routine to see what it's doing wrong!
Well, comparing it to my new notes, it's actually completely wrong at the end, when it comes to sending the parity/stop/ACK bits! A quick rewrite to how I think it should go isn't too hopeful - the keyboard LEDs flash like mad. Tweaking the timing by throwing in a few calls to _wait_bit_low and _wait_bit_high to synchronise my data with the clock stops this completely - and now the code is as accurate as it was before, at 100%, but about twice as fast.
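For what it's worth, the tail end of a host-to-keyboard byte runs: eight data bits, an odd parity bit, a stop bit (data released high), then an ACK bit where the keyboard pulls data low. The parity bit itself is cheap to compute, as the Z80's logical operations set the parity/overflow flag - something like this (a sketch, with _send_bit_0 and _send_bit_1 standing in for whatever clock-synchronised bit-output code is used):

ld a,e ; E = the data byte we've just clocked out.
or a ; Sets P/V if A contains an EVEN number of 1s...
jp pe,_parity_1 ; ...in which case the odd parity bit must be 1.
call _send_bit_0
jr _parity_done
_parity_1:
call _send_bit_1
_parity_done: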
Replacing my branch code still doesn't work all the time - sometimes the LEDs change, sometimes they don't. Not really believing it would work, I threw in a check for the ACK byte returned - if it was $FE, the 'repeat last command' byte, I'd send the command again.
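The check itself is simple enough to bolt on - something like this, assuming an at_get_byte routine that reads the keyboard's reply into A:

_send_command:
push af ; Remember the byte in case we must resend.
call at_send_byte
call at_get_byte ; Fetch the keyboard's reply.
cp $FE ; "Resend last byte", please?
jr z,_resend
pop af
ret
_resend:
pop af ; Recover the original byte...
jr _send_command ; ...and try again.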
My routines were clearly not as broken as I thought - the keyboard LEDs now change status perfectly, and however much I hammer the Num Lock, Caps Lock and Scroll Lock keys, I cannot lock up the program or get the keyboard LEDs to display the wrong value. Not to mention that typing other keys is back to the lightning-fast response it used to have...