From blinking things to high speed serial communication: One man's quest to do all the fun things electronics has to offer.
Text
Barcodes and Signal Recovery
▌▌▖▌▖▘▌▖▖▖▌▖▘▖▖▖▌▘▘▖▖▖▌▖▖▖▘▖▖▌▖▖▘▌▌▌
Last week I ran across an interesting project called dollcode, by a GitHub user named v01dlabs.
On its own, it's an exceptionally straightforward project: encode numbers and strings using a ternary code built from a few simple Unicode characters: ▌▖▘. I've long had a fascination with the way barcodes are read and the signal processing that goes with it, so I dug into the workings.
Unfortunately, as I fiddled with it and tried to understand how one might decode it, it became increasingly clear that there were some limitations to the coding being used. For example, the encoding for "x" is "▌▌▌▌" and the encoding for the number 120 is also "▌▌▌▌". Similarly, the encoding for "p" is "▘▘▘▘", which, if you don't know how tall the barcode is, looks exactly the same as the encoding for "(", which is "▖▖▖▖".
So, with that seed planted and some knowledge about signal recovery, I set out to make my own variation of the dollcode signaling method.
In my design, which I ended up calling signalbar, I added several features to constrain the coding and improve its recovery, with flexible optical media (printed paper) as the primary consideration.
Rather than encode data directly, a 4-to-3 encoding would be used: 4 bits of information carried in 3 trits (3³ = 27 codes, enough to cover all 16 values with room left over).
The minimum unit of transfer would be a nibble. This is convenient in the 4-to-3 encoding, but also means that sending data smaller than 4 bits would require the cooperation of a higher level protocol.
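To make the arithmetic concrete, here's a minimal sketch of packing a nibble into three trits. This is purely illustrative: the glyph order is made up, and it ignores the multiple symbol tables and control codes described next, so it is not signalbar's actual mapping.

    TRIT_GLYPHS = "▖▘▌"  # hypothetical order; the real symbol tables differ

    def nibble_to_trits(nibble: int) -> str:
        """Pack a 4-bit value into 3 trits (3**3 = 27 >= 2**4 = 16)."""
        if not 0 <= nibble <= 0xF:
            raise ValueError("expected a 4-bit value")
        trits = []
        for _ in range(3):
            trits.append(TRIT_GLYPHS[nibble % 3])
            nibble //= 3
        return "".join(reversed(trits))

    print(nibble_to_trits(0xA))  # "▘▖▘" under this toy mapping (101 in base 3 = 10)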
Each symbol would actually come from one of three symbol tables based on the last bit of the previous symbol. This would allow for error detection and make use of the unused codes.
Two ▌ trits in a row would never appear outside of control symbols. This would make detecting control signals very easy.
Framing symbols would be added that could be used to detect the beginning and end of transmission. These symbols would be the same in all three encoding tables to simplify implementation. The start symbol would begin with ▌▌ and the stop symbol would end with ▌▌, ensuring that the edges of the frame would always be max height and that, even read backwards, they would immediately trigger the ▌▌ control sequence detection.
Symbols would be chosen such that no arrangement of them could create an arbitrarily long run, and in a way that minimizes the maximum run length. This would aid in both clock recovery and in detecting the code height on paper that isn't flat, since you'd be guaranteed to see symbols of different heights within a known, short window.
There are several things I think might be useful, such as frame length encoding and check codes, but those details are being left to higher level protocols. This implementation is mostly about the encoder/decoder hardware, so anything beyond that is out of scope for now.
With that said, the signalbar repo itself ended up being a valuable opportunity to do some work with GitHub Actions (I had been using GitLab CI/CD up till now, and Jenkins before that), so that was interesting. With some more time, I may add some explicit tests to verify the function of the module. But even without tests, it was a great way to bring pylint and mypy to bear, and it made refining the code very nice.
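For the curious, a lint-only job like that is just a few lines of workflow file. This is a generic sketch rather than the repo's actual workflow, and the module path is an assumption:

    # .github/workflows/lint.yml -- a minimal sketch, not signalbar's real CI
    name: lint
    on: [push]
    jobs:
      lint:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.x"
          - run: pip install pylint mypy
          - run: pylint signalbar  # module name assumed
          - run: mypy signalbar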
However, this was largely just an amusement, since I hadn't done a lot of personal programming over the last few years thanks to work. I don't know if I'll go far enough with this project to truly finish it up. dollcode, and now signalbar, are interesting, but I wouldn't particularly regard either as especially useful; they're just a neat way of exercising the ol' braincells. Nevertheless, I recommend you go check it out; there are some useful ways of thinking about signaling wrapped up in the specific implementation that this toy example should make fairly clear (such as how 8b10b encoding solves the self-clocking issue.)
With that said, I have stuff in the pipeline that is more useful, but that writeup will have to wait for a different day.
#programming#barcodes#barcode scanner#barcode#python#python3#dollcode#signalbar#silly things I do to keep myself sharp and entertained
Photo
On The Matter Of Shared Assets
I’ve been working a lot on a Ren'Py visual novel (more on that at some future date; novelling takes a LOT of time, and that’s when it isn’t also a programming/art/music project.) To help me make progress, I’ve been using a lot of free assets that folks have shared on the internet for the purpose. It’s been a massive boon for me, so I wondered if there was a way I could give back to these folks.
I’m not particularly notable as an artist, so just making a sprite or a background was probably not gonna work. I’m an adequate composer, enough so that my work actually kinda doesn’t sound out of place in the context of a VN’s background music. But that work is highly specific to my own VN, so I’d rather not share any of the assets I’m actually using since it would spoil the uniqueness of my work.
However, I somewhat arrogantly regard myself as a better-than-adequate programmer. In the course of writing my game, it became apparent that I was going to need some mechanism for displaying instant messaging in a believable way. To that end, I wrote a quick library that would automate 99% of that work and, moreover, allow me to be very flexible in how I display it.
So I realized that an easy way to give back would be to make this automation framework available to everyone; hopefully helping the people who do all the cool art, but do Python kwargs unpacking not-so-well, make much better games.
As such, if you’re a Ren'Py novelist, you might give it a quick look-see over at itch.io or GitHub. Lemme know if it helps!
Photo
Game jam!
Back in June I put together a game jam (my second) to try and bring together a team of artists and developers to produce a simple game in not more than 24 work hours. I ended up getting a team with 6 folks on it. Four worked locally, and one artist worked remotely.
My primary role was project manager / team lead. However, with 6 folks and 3 business days worth of time, you can't exactly just sit back and play manager all day so I did a bunch of odds and ends for the team including programming game mechanics (I handled the victory condition testing and game controller,) sound design, devops, etc. Nevertheless, I ended up spending appreciable time doing managerial tasks. My team had a wide diversity of people on it, and keeping them happy and focused on our objectives took an appreciable amount of effort.
This wasn't surprising, and turned out to be why my role was "project manager" this time rather than "programmer". With Meat Box we had realized after the fact that we missed a number of our secondary development targets because of coordination and focus. To be precise, the issue had been too much focus. Both the art and software teams at one point or another found themselves bound up on a task, and rather than trying for a bit and then refocusing on using an easier workaround, both teams ended up digging in their heels for several hours trying to get it to work the way that it was originally planned.
Having a dedicated project manager, whose first job was to keep the team focused on being effective rather than being perfect, was the intended solution to that problem. As such, my goal this time around was to keep my eye on the workflow and make sure everything went smoothly. However, we ended up with a new team dynamic this time due to some new members, and I found myself quickly becoming more directly involved in human resource management. It was definitely challenging, but we got through it and delivered more than the minimum viable product for our objectives, which is always impressive to me given how bold we are in specifying the game.
If I'm being honest, I was especially amazed by the assets provided by the art team. I know it's probably because I'm not exactly a graphic artist, but they produced a TON of assets for us in the blink of an eye, with turnarounds fast enough that an asset was typically live by the time we were ready to use it in the level.
In general I'm glad we did this, and though I will have to take a very different tack in managing the project and human resources, so that I can detect and defuse as many issues as possible in advance of the actual game jam, I think there is some merit to doing more of these.
Text
Discorded*Gummy
As you may know, I've been working on Gummybot for a long time. He's monolithic, he's kludgey, and he's built to support IRC, which itself is ancient and kludgey. But much like IRC, Gummybot serves his intended purpose (teaching me Perl and filling in for the poor, dead owlbot) quite well.
Yet time moves on, and rather by accident, so too has the ATG chat. A side effect of me trying to make my IRC experience more productive and less awful was that I used a prefab bridge bot (which is its OWN set of problems, believe you me) to connect the ATG to a Discord server. Discord, if you haven't heard of it, is a text and video chat service hosted in the cloud. This means I can't kick it when it breaks without getting a trespassing charge, but in exchange, it allows for push notifications to mobile and desktop clients, and it maintains a log of all messages sent, organized for review if you look away for a bit.
As a result of me building this bridge, though, just about everyone moved over to the Discord end of it. This is great for me, since it means I can use Discord-specific features to chat with most of them. However, it's not so great for poor Gummy. You see, the downside to the bridge is that it looks like a single user on both the IRC and Discord sides. So if three people talk on Discord, on the IRC side the nick is always the bridge's. The message body includes the end user, but anyone or anything that depends on the nick to identify the speaker has a problem. This includes some auto-reply tools, and obviously Gummy.
Well, I could retrofit Gummy to detect that the bridge is talking, and then attempt to rework the entire messaging system so that it's compatible with non-IRC nicknames (a huge undertaking with relatively little skill payout,) OR I could pick a new language (JavaScript, via node.js) and a new communications library (Discord.js,) and re-implement the most critical nick-specific functions there instead.
And so I announce Discorded*Gummy! He's actually on his second revision already; the first was made months ago to bring the most critical functions online: memos and nick-linking. Honestly, what I've got currently doesn't appreciably change that. However, what it does do is something that was needed back at the very beginning with the original Gummybot, something IRSSI's plugin system didn't trivially allow for: modularization.
Using an event handler, I've put together a set of tools that will allow me to add and remove individual modules without too much hassle. The advantage is that I no longer need source files with 2000+ lines of code in them that have nothing to do with each other. This will make continued development on the new Gummybot infinitely more pleasant.
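As a sketch of the pattern (written in Python for brevity here, though the real bot is node.js/Discord.js, and all of the names are made up):

    from collections import defaultdict

    class Dispatcher:
        """Routes events to whichever modules registered for them."""
        def __init__(self):
            self._handlers = defaultdict(list)

        def on(self, event):
            """Decorator a module uses to hook an event."""
            def register(func):
                self._handlers[event].append(func)
                return func
            return register

        def emit(self, event, *args):
            for handler in self._handlers[event]:
                handler(*args)

    bot = Dispatcher()

    @bot.on("message")  # in the real layout, this lives in its own module file
    def memo_module(nick, text):
        if text.startswith("!memo"):
            print(f"storing a memo from {nick}")

    bot.emit("message", "jim", "!memo remember the milk")

Each module registers its own handlers and never has to know the others exist, which is the whole point.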
Text
On VPN Tunneling
At the heart of this project are VPN tunnels. Since this is an overlay network, the physical links for the network are through several NAT isolated links to the Internet. As such, it is critical that network tunnels be used to establish a virtual cable between the various nodes in the network.
However, tunnels are always a bit of a weird experience. I’ve done a lot of work over the years with small-scale OpenVPN tunnels for connecting my own PC to my home network while roaming, but I knew there were several other tunneling technologies out there.
After researching a few of the other technologies, I discovered that the big players out there were PPTP (an old tech I’d known about for over a decade), L2TP/IPSEC, SSTP, IKEv2, and OpenVPN. The first two are supported directly in Windows, which would have been nice. However, each of them had its own problems.
PPTP is insecure. The initial tunnel’s authentication process is relatively easy to defeat and the PPP headers are unencrypted/unsigned.
L2TP/IPSEC isn’t well suited for NAT. Yes, there are things you can do to enable NAT traversal by tunneling your tunnel, but at some point you realize that you’re sacrificing a ton of your MTU to headers. Also, L2TP/IPSEC is a pain to set up.
SSTP and IKEv2 aren’t well supported on Linux.
OpenVPN is fully encrypted and uses SSL/TLS authentication, but it’s a nonstandard protocol, so it needs its own client software rather than the OS’s built-in VPN support.
At the end of the day, Oni and I had the most experience with OpenVPN coming into the project, and none of the other alternatives were compelling enough to make us learn a new server/client system. As such, we opted to use OpenVPN.
VPN tunnels come up in two places in the network. The first is in the router-to-router links. These are not set up as client-server relationships, since they’re always one-to-one; essentially like the old PPP links used to connect routers. These were pretty easy to set up and debug.
The second place that VPN links come in is in allowing clients to join the network. In Oni’s network, many of the computers participate directly on DotNQIG. However, for people who are roaming, or in my case, where direct participation isn’t possible due to a quirk in my Internet gateway router (it refuses to route packets back out the port they came in on), a client VPN lets a machine join the network as necessary.
That said, even though I did have experience with OpenVPN before, one thing I hadn’t done with it was mix it with OSPF/Quagga. That turned out to be a bit more interesting.
In the context of the router-to-router links, it was pretty straightforward. I assigned a network that covered our peer-to-peer routing region (10.255/16) and quagga did a good job of inferring the necessary routes. However, the client/server links turned out to require a bit more work. The reason was that quagga inferred the available links from the tunnel interface’s PtP target. In client/server mode, this address isn’t likely to be valid. But beyond that, it would normally mean that routing entries would be propagated to EVERY ROUTER for each VPN client. As such, it was time to learn a bit about area summarization.
After adding an area summarization line to the quagga config, the end result is that so long as there were any routes in my 10.10.255/24 client VPN subnet, the whole subnet would get wrapped up into a single routing entry. First, this made the connections work; second, it meant I could have one or one hundred clients connected to the VPN server at the same time, and the OSPF routing table would see all of them as one entry.
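For reference, the relevant piece of the ospfd config looks something like this; the area number is an assumption, but the two prefixes are the ones above:

    router ospf
     network 10.255.0.0/16 area 0.0.0.0
     network 10.10.255.0/24 area 0.0.0.1
     area 0.0.0.1 range 10.10.255.0/24

The last line is the summarization: the border router advertises the one /24 into the backbone instead of a route per connected client.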
Photo
DNS in DotNQIG is a bit of an interesting challenge. The reason is that we still want domain resolution to work for the Internet, but we want systems participating in DotNQIG to be aware of the .NQIG top level domain (TLD). In the real internet, this would be done by registering a new TLD with the ICANN root servers.
However, doing that isn’t actually an option. Although a few new TLDs, like .photography, have been added recently, the reality of the matter is that ICANN doesn’t add TLDs on request. But more than that, we wouldn’t really want them to; this is intended to be a private LAN, and none of the addresses we’re using are routable from the Internet anyway. So we had a problem: how do we overlay our domain system onto the real one?
The initial answer was forwarding. In both Oni’s network and mine, we set up a forwarding server and some basic zones. The forwarder was simply pointed at our ISP’s nameserver. This worked well while each of us ran the only nameserver on his own network. However, when we merged the networks and slaved his .nqig and wikis.nqig zones on my system, things started getting a bit more difficult.
You see, in BIND 9, if a forwarder is configured, any request that cannot be answered authoritatively by the server is sent to the forwarder address. When the systems were separate, there was only one server in DotNQIG that could answer these queries. However, when we bound the two together, each with its own nameserver, a problem presented itself. se.wikis.nqig was a CNAME to web-server.jimthecactus.nqig. The wikis.nqig zone lived on Oni’s nameserver and was slaved on mine, and the jimthecactus.nqig zone lived on my nameserver. Resolving SubsequentErrors.wikis.nqig failed on Oni’s system. The reason was a bit of nuance in the statement above. If Oni asked his nameserver for the address, it would check its own zone for wikis.nqig and happily return SubsequentErrors as a CNAME for web-server.jimthecactus.nqig. But since it didn’t have zone information for the jimthecactus.nqig zone, that request couldn’t be answered authoritatively. So even though it knew which nameserver to ask for an authoritative answer, it would instead send the request on to the ISP, per the rules of forwarding.
This meant that my zones couldn’t be resolved. The same held true in the reverse direction for the oni.nqig zone. The only reason I was able to access the SubsequentErrors.wikis.nqig address on my system was that I had a slaved (and therefore authoritative) copy of both the .nqig and wikis.nqig zones.
The correction then was to use a full, caching server, rather than a simple forwarding server. This is a bit more resource intensive since the server is now keeping a cache of all queries and has to make queries against the real root servers rather than running through the ISP’s caching servers, but in exchange, it will fully work to resolve queries, even if it has to reach out to other servers to do it.
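In named.conf terms, the change boils down to something like this (the addresses here are placeholders):

    options {
        // forwarders { 203.0.113.53; };  // the old setup: punt everything to the ISP
        // forward only;
        recursion yes;                    // the fix: resolve from the roots and cache
    };

    zone "nqig" {
        type slave;                       // authoritative copy pulled from the master
        masters { 10.64.0.53; };          // placeholder for the master's address
        file "slaves/db.nqig";
    };

With recursion enabled and no forwarder, anything the server can’t answer authoritatively gets chased down properly, CNAME hops across zones included.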
Photo
Okay, so I said I'd cover DNS next, but I wanted to get this out first.
First, though I kinda hinted at it rather loosely, I got my friend Oni to do their bit of the network on Friday. This really helped me work out some stuff that wasn't immediately apparent with my fully private setup; DNS turned out to be the big thing where I needed an external peer to see all of the problems. So yay for having people to connect to! But that also means it was time for a new network diagram.
On that front, I did a bit of playing around with Dia, a FOSS tool for drawing diagrams. With a bit of work, I was able to get a network schematic put together. I was even able to keep my notation where hostnames are italic and bridges and networks are done in bold.
The DNS writeup will take a bit, but look for that either today or Wednesday, depending on how things go over the next few days.
Text
On Routing Tables
When we decided to start this project, one parameter that was defined early on was that routing tables had to be dynamic; i.e., it had to be possible to add new nodes to the network without every person in the network having to hand-add them. At the scale we’re working at, it’s probably not a big issue, but if nothing else, this would give us a chance to learn a bit about dynamic routing protocols.
We settled on OSPF, as we were making a network that most resembled an inter-office network and OSPF is well suited to internal networks. As such, we needed to lay down areas and set up an OSPF daemon on each router to get them to share routing tables. The first step was to determine which software to run.
Turns out that there have been several competing routing daemons out there for a few years, and some politics along the way that ultimately left several projects dead. Our initial search turned up gated and zebra. Zebra seemed to be more full-featured than gated. However, some research determined that zebra was no longer maintained. A new FOSS project called quagga had taken up the reins, amusingly named after an extinct zebra-like critter.
Ignoring that that doesn’t bode well for their project, we started getting things set up. In my initial tests I had set up a few areas and defined privbridge as area 0. However, since I now had my friend’s network coming online and up to speed, it became apparent that doing so had broken some stuff for reasons that weren’t initially clear to me, so to get things moving I crammed all of the routers into area 0.
Upon further study into OSPF, though, the reason became a bit more clear to me; OSPF requires that all areas be directly connected to area 0. This meant that either every router had to be added to area 0, where dynamic routes could exist, or a “virtual link” had to be made between them so that they were effectively on area 0 (a one-liner, sketched below). Because our network is small, the correct answer is that every router that participates in the backbone links between systems will be in area 0, and the router attached to leafbr will be in two areas. This isn’t strictly necessary, as that complexity of the network only exists to ensure that we always have something two steps away to test against, but since this is about the journey and not entirely about functionality, a bit of synthetic complexity can be seen as a good thing.
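(For completeness, the virtual-link escape hatch we decided against looks like this in ospfd; the transit area and remote router-id here are hypothetical:)

    router ospf
     area 0.0.0.1 virtual-link 10.255.0.9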
Next time I’ll cover DNS. That was another one I’d done a million times before that ended up taking me 4 hours to figure out because of a quirk in the way overlay networks work.
Text
On Building a Testbed
So the first step in this project turned out to be both easier and harder than I had originally expected. The reason is that I had initially expected to have a lot of trouble getting the XEN systems prototyped and built. In the end, however, it turned out that XEN-Tools made it surprisingly easy to establish a template system and then build out from there.
What was less easy was making (and routing) the necessary bridges. As you can see from the network architecture diagram, the goal was to have two bridges so that the path from the most distant virtual machine to my main network was at least two hops. Along the same lines, the router on my main network that handles my gateway to the internet isn’t smart enough to route packets between subnets on the same LAN port.
Ya, I know it’s not good practice to run two subnets over the same network, but this is a project about learning routing, and it would have worked well enough.
In any case, this meant that I needed to do masquerading at the final hop before traffic hit the physical wire. Masquerading is something I’ve done plenty of, and I expected not to have to think too hard about it, but boy was I wrong.
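The rule itself is the textbook one-liner; eth0 here stands in for whatever the physical interface actually was:

    # masquerade everything leaving through the physical NIC (interface name assumed)
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE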
See, it happened that as long as packets originated from node3, everything was fine; the packets would be masqueraded with no fuss. However, any packets that came from web-server would come out of blocky without any source mangling at all! I added debugging counters, traced packets around, and did everything I could think of to figure out where my rules had gone wrong, and still found myself running in circles trying to hunt this down.
In the end, it turned out that, by default, IPTables inspects every packet that traverses a bridge. Since, at the time, I was using blocky to both provide the bridge and do the masquerading (something that node1 is now responsible for,) this meant that packets had to traverse leafbr and then privbridge to get to their destination. This normally wouldn’t be a big issue; it should just be like going across a link from one router to the next.
However, IPTables flags every packet it inspects, and once it’s routed a packet, it will not route it again. This may not seem like an issue (and actually would be great to prevent routing loops and minimize CPU load) but in this case, it meant that when the packet traversed leafbr, it was flagged as having been routed. Normally in the router-to-router traversal, this wouldn’t be an issue because as it moved from leafbr to privbridge it would have moved between two different routers and the flag would have been dropped.
But this is a virtual link, and leafbr, privbridge, and my physical ethernet link are all on the same computer, and by extension, IPTables is shared between them. Digging around online turned up a valuable nugget: the bridge inspections could be disabled. Since I was fine with bridges not having control lists (I don’t need a smart switch or VLANs), I simply disabled it by adding "net.bridge.bridge-nf-call-iptables = 0" to my /etc/sysctl.conf file. "net.bridge.bridge-nf-call-ip6tables = 0" does the same thing for IP6Tables.
That said, with the shift to using node1 for routing packets to the physical LAN, effectively isolating the dom0 from the whole process, this change became moot and I have since removed it. But learning that IPTables is a touch-once mechanism, and also finding a bunch of other neat things like the IPTables Flow Chart, fully justified the experience.
Text
Fixing the “Woops” Factor
Short update. I’ve modified the network topology from the one in the chart and added node1, which now does a lot of the routing blocky used to do. Now blocky has no IP on privbridge; instead, node1 has an interface on both the physical bridge and privbridge.
The reason for the change is that right now, if I’m working on the various routing components remotely and make an iptables rule that makes me unroutable, I can’t recover until I get back to a physical keyboard. By taking all of those rules away from blocky and assigning them to node1, I make it so that in a worst-case scenario, I can connect directly to blocky and use a Xen console command to recover, either by rejoining node1's terminal or by rebooting node1 altogether.
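The recovery path is then just a console attach from blocky's dom0. I'm assuming the xl toolstack here; older Xen installs spell it xm:

    # from blocky's dom0: reach node1's console even when it's unroutable
    xl console node1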
Photo
Presented here is the network topology for the DotNQIG project test bed. Chao is my main PC, and it played console for everything. blocky is the aforementioned XEN VM server and primary router. node2, node3, and web-server are all VMs running as DomUs under blocky. privbridge is the main virtual bridge for the VMs and has a connection to blocky to allow for traffic to be routed. leafbr is a pure virtual bridge that should (in theory) have no connection to blocky. I’ll cover why that “in theory” was necessary in a later article as that nuance cost me several hours.
Source NAT is used on all traffic routed through blocky. This is done because my wireless router will not let me create static routes pointing at anything other than the WAN interface. As such, traffic destined for the internet would not be able to return to the computer on the source network. The SNAT on blocky then allows for all traffic to appear to come from blocky’s physical ethernet port, regardless of where it actually came from inside the VM. Getting Chao to talk with web-server then was the true test as it’s only accessible via node3. I could have used DNAT to connect things, but the goal here is that I want to have full NAT-less communication with the darknet. As such, this meant I needed a more rugged solution, and I’ll cover the VPN link I used to do that in a later article.
I’ll see if I can get more written up in the coming days so you can see the rest of the project, along with any additional things that come up as I get my friends’ nodes added to the system and fully debugged.
Text
Making an Internet: The DotNQIG Project
Over 10 years ago I went through Cisco’s CCNA program and learned the (hilariously flawed) basics of how the internet worked. However, I never took the test, and I ended up going into electrical engineering instead. I’ve worked to keep the more common of those skills up, but I never really went all the way and made a fully fledged internet with proper boundary routers.
Some conversations with friends over the last few years suggested that we put together an overlay network: a darknet-like network where we could play LAN games without worrying about NAT. Over those years, though, the real challenge has been getting them involved and active. Making a network means having more than one node, and my friends aren’t terribly good about actually moving on projects.
Well, after something like 4 years, I’m finally getting around to doing the project anyway, with or without them. The secret is working around my friends’ participation using a computer I’ve called Blocky: a XEN VM server that can host a number of VMs. Using Blocky, I laid out a number of nodes and a complete network with multiple paths, some unroutable except through other nodes. This allowed me to play with all of the pieces I needed to get things working.
The end goal is to make a network that would be isolated from my main network, be able to access the internet itself, be able to host services (namely a web server with a Wiki on it that used to be hosted on a public server), and to automatically advertise and discover new routes.
I’ll be writing a series of articles about that experience and the weird things I learned (and am still learning) in the upcoming days. Hopefully I’ll get the first one written up sometime today and show you guys the network topology.
Photo
Sometimes you just want to make better use of your tools. Sometimes you want to write neat code that does fun things. And sometimes you just want to jam out to some Daft Punk. This project does all of the above.
I recently learned about a media service called Music Player Daemon (http://www.musicpd.org/). MPD is essentially a headless back-end tool that is used to manage media files, create playlists, and then render them to an output (usually an audio playback device, but not necessarily.) This makes it excellent for installing on small computers connected to a room stereo system. You can then connect to it from external clients, like cell phones, to control the player.
By itself that would have been pretty darn cool and a great second use for Mr. Wiggles, my Raspberry Pi based testing web server. But I have an LCD plate that I really wanted to integrate into the system. Enter RaspberryPiMPDJukebox.
The MPD Jukebox client isn’t really a novel idea (several projects have done something similar,) but I wanted to take a crack at multi-threading in Python and working with locks and queues. So I put this together as a client for the MPD server that I have running on Mr. Wiggles.
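The core pattern is small enough to sketch. Note that the post doesn't name the client library, host, or port, so python-mpd2 and localhost:6600 here are my assumptions:

    import queue
    import threading
    import time

    from mpd import MPDClient  # python-mpd2, assumed

    updates = queue.Queue()

    def poll_mpd():
        """Worker thread: poll the daemon so the UI loop never blocks on I/O."""
        client = MPDClient()
        client.connect("localhost", 6600)      # host/port assumed
        while True:
            updates.put(client.currentsong())  # dict of the current track's tags
            time.sleep(1)

    threading.Thread(target=poll_mpd, daemon=True).start()

    while True:
        song = updates.get()  # UI thread: this is where the LCD update would go
        print(song.get("title", "<unknown>"))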
However, I did learn a few things along the way. The older Adafruit library I was using was apparently DOG slow. The work I initially did with threading helped mitigate a bunch of that, but button responsiveness was essentially non-existent because updating the screen cost everything the CPU had and was slow enough I could watch it update. The new libraries improved that by a LOT and also allowed me to reduce the transactional costs by reducing button checks to a single call to the LCD.
That said, something I’d really like to see is a swap from a GPIO expander, which must be polled (something that isn’t a good idea when working with a non-realtime operating system like Linux,) to a microcontroller based system. Sending commands to a microcontroller to ask if, when, and how long a button was pressed is a far better strategy than hoping a fast loop in Linux is going to get around to seeing things, especially since it’s possible to get blocked out by other bus transactions.
This will require a rework of the LCD plate itself. Not sure if that’s something I want to tackle at this exact moment, but it’s a project to be put on the table.
Text
VHDL vs Verilog
Back when I was in school, I was originally trained on VHDL, an IEEE-backed language used for developing on FPGAs. However, as I’ve looked into what people are doing commercially, it seemed a lot of people were using Verilog. As such, it seemed prudent to study Verilog and learn how to use it.
For those of you who’ve heard me talk about programming, what comes next should be no surprise: I hate Verilog. Not because I can’t make it work; far from it. I’d recently made a button-advanced up-counter with a debouncer and a seven-segment driver, just to work the chops a bit since I hadn’t done FPGAs in a while. It’s a simple project where you push a button and it counts up by exactly one every time. The reliability of this project is the core of the exercise, since buttons beg for chatter and metastability. If you do it wrong, you can expect it to count by five sometimes, or to randomly jump to a new number.
In any case, the VHDL version I wrote first took me a bit, since I’m super rusty, but I got things up and running with each register and each net in the design being either explicit, or implied in a clear and obvious fashion. With Verilog, it took me less than half the time, and half the code as well, despite the fact that I had never coded in Verilog before. “Why,” you might ask, “do I say then that I despise Verilog?”
Well, the parts that make it fast also make it much more difficult to write robust code on the first try. Just in my little program, I tried adding a third state to my state machine but forgot to widen the register that held the state, causing the system to cram a two-bit number into a one-bit register. The Xilinx Verilog synthesizer didn’t so much as blink at the truncation. The VHDL compiler not only would have complained, it would have demanded that I explicitly define whether the assignment was intended to be signed or unsigned, and it would have barked at me at every turn.
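The failure mode looks roughly like this; a reconstruction from memory, not my actual project code:

    module fsm_bug (input wire clk, output reg led);
        reg state;                    // still only one bit wide -- the bug
        localparam IDLE  = 2'd0,
                   COUNT = 2'd1,
                   DONE  = 2'd2;      // the newly added third state

        always @(posedge clk) begin
            state <= DONE;            // 2'b10 silently truncates to 1'b0 (IDLE)
            led   <= (state == DONE); // can never be true
        end
    endmodule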
You’d think that all of those warnings and “well of course I want to add it as unsigned!” moments would get on my nerves and slow me down. And they do. I won’t lie and say that I’m happy when I have to express every tiny little operation explicitly. However, it means I get it right the first time. My little register-size snafu would never have compiled, and I would have fixed it. Here, I didn’t detect the bug until I went to simulate.
In a system this small, that’s not a big issue; it was caught in simulation, and the odd behavior it caused wasn’t really that bad. However, in larger systems that’s a lot less likely to happen. Yes, working with deep hierarchies and fine resolution test benches solves this problem. However, if the reason you’re switching to Verilog is to save time, the odds are good that you’re not going to choose to erase all of those savings by spending a million years writing test benches.
When missed states and assignments like this can cost you hundreds of hours of debugging and, depending on the product, millions of dollars in recalls (let alone when life and limb get involved,) you can see why a language that lends itself to strict typing is a good thing, and why I favor it. Ya it’s a pain. Ya it barks at the simplest, most obvious of things. However, it encourages you to do it right and to be sure it knows what you mean. As such, when you inevitably cut corners on your testing, you’re that much less likely to let through something that’s going to ruin a bunch of folks’ days.
Text
In Related News, It Works
So despite the temp ban (it was lifted after an hour or so,) I continued work on the relay and was able to prove that the GroupMe/IRC relay worked. I’d really rather have a cellphone-centric IRC server/client pair that allows a cellphone client to subscribe to an IRC channel via push rather than have to go through an intermediary, but that involves making 3 parts (IRC client, push server, phone client) and this only requires one.
That said, there’s an open question about whether GroupMe’s API is really suited for this kind of work, but in the short term I’ve solved the cell phone comms issue. To say that I’m beyond ecstatic would be an understatement. ^_^
Text
IRC Bans And You: Lessons in Developing Politely
Well, I knew I’d run into it eventually; with enough tinkering, you are bound to hit a limit somewhere when you use other people’s equipment. To that end, some prototyping work I was doing on building a GroupMe/IRC bridge caused me to momentarily go over the ban threshold for session counts on the IRC server I normally work with.
In retrospect, it may have been wise to set up my own IRC server to run the experiments against, but at the end of the day it’s a little late for that realization.
In its own right, the ban doesn’t affect me much: I wasn’t terribly active on the server involved, and I CAN just set up a new server and continue my research. But I was running some support software for some friends that can no longer be expected to work, at least until I figure out whether it’s even possible to apologize and appeal.
To that end, there’s some degree of urgency to getting things figured out, but I’m not readily finding a way to do that, as the server’s administrators don’t seem to have any public contact points that aren’t on IRC.
Link
More than a year into this social coding thing and I wish I had seen this when I started. I’ve ended up at the conclusions recommended here anyway since the various teams I’ve contributed to already use this method, but it’s nice to have such a handy reference to throw at folks when I find myself leading the team.