I've been focusing recently on weight and health. Haven't really been talking about it much for a few reasons (except on the Geeking After Dark podcast...) but the biggest recent development was the acquisition of recumbent trikes.
It had been quite an adventure in itself (dear Customs and Border Protection: eff you) but my son and I finally got our modified First Avenue trikes!
At first we primarily rode in our driveway; we made 6 circuits the first day, then 5 the next. It was our basic shakedown, learning the handling on the trikes and getting a feel for the brakes and seating. We also needed to build familiarity with the shifting, because our trikes are equipped with Nuvinci hubs instead of relying on derailleurs for changing distinct gears.
(By the way, the Nuvinci hubs are utterly amazing...expensive, but amazing.)
There was a lull in riding because Little Dude had a friend over for a few days, and after that, weather decided it didn't want to cooperate. But the weekend rolled up, the rain broke, and we were determined to try riding on the road!
We broke down the trikes; in order to rack them, the seats, trunk bags and accessories have to be removed. Once the trikes were secured to the car rack, we got the equipment fit into the back seat of the car and filled our insulated water bottles (flavored with tablets that add caffeine, vitamins, and some refreshing...somethings...to keep you from feeling like you want to pass out while exercising) before trekking to a valley area about half an hour from our home.
The area we rode is near my childhood home; I was relatively familiar with it from working at a historic site and attending the church in that valley. Other than those two landmarks, the valley is populated by farms, and the route itself is mostly a closed, paved loop.
The closed circuit meant mostly local traffic, and it was decently paved as a two-lane road (without a center line, though, and kind of narrow in places), so I figured the sparse traffic would make this a nice introduction to road riding.
In my head I picture the path as an elongated 2-dimensional Pokeball; the top half sits at a higher elevation, and the middle of the circuit is bisected by a packed dirt/gravel road, which would cut the travel in half and avoid the climb to the upper part of the loop.
I had forgotten about that bisecting road; my father reminded us about it when asking about our route. I have been exercising a little using a pedaling machine under my desk (not a perfect simulation of a recumbent bike, but better than nothing) for several months as well as some basic workout routines from a fitness specialist. My son hasn't been working out; aside from our rounds on the driveway (which, to be fair, is about 600 feet long and rises a little under 20 feet from entrance to parking flat) riding the recumbent on the road was kind of cold turkey exercise for him.
We de-racked the trikes and re-equipped them, mounting the trunk bags, water bottles and seats. We sat down, adjusted mirrors, and I explained the proposed route along with reiterating my warnings about watching for traffic and staying to the right (I was perpetually anxious about our road riding since he is not experienced with driving, let alone traveling on the road.)
"Little Dude, we have two options. The first is the one I was first thinking of...it's paved, but it's longer, and there's a big hill climb to deal with. The other is the one Grandpa mentioned; it's a packed gravel road that cuts travel in half and avoids the hill, but it's going to be bumpy. Which one should we take?"
He thought about it for a moment and said, "We'll take the long way."
I was so proud of him!
"Okay. Ready?...let's go!"
I launched Strava, an app on my iPhone for tracking our exercise stats, and we started our ride. The first part was relatively flat; some small inclines, but nothing we couldn't handle. Little Dude was slower, since he hadn't yet developed the leg muscles for the leg-press motion that pedaling a recumbent trike mimics. I did get ahead of him at times, but I kept an eye on the mirror, and if he started to fall pretty far behind I'd pull off and wait for him to catch up.
We did pass some family friends who were out for a walk with their dog. I didn't know they even had this dog...I told my parents that I didn't realize they had a pet bear, because this thing is the size of a polar bear cub. I mean, it's HUGE. And fluffy. It was a giant white furball the size of a dwarf horse, and it was at least as tall as my head as I sat on the recumbent trike.
"Oh, that's Rufus," my parents said.
"Rufus. They named this bear-sized dog Rufus." I had trouble wrapping my head around the juxtaposition of a pet named Rufus that looked big enough to pull a sled of kids.
We said hello as we rolled by them and the dog just sort of gazed quizzically at the two overweight riders on the weird tadpole machines; I was thankful it didn't decide we were invading its space and attack or bark, as I was certain the force of the bark might blow us into the corn field.
At one of the pause points, I pointed out that we were approaching the hill.
"Last chance. We can turn here and take the halfway road, or we continue up that," I said as I gestured towards the visible escalation in pavement.
Little Dude rested a few minutes, took a swig of water and said he was ready to go up the hill.
Oh gawd...the hill was tougher than I thought. I had to stop a few times on the incline as my muscles hit the point of failure. If I was hitting that point, I knew Little Dude was having it even harder. I could see in my mirror that he was stopping along the road, but after a few minutes I'd again see his feet pumping the pedals as he made progress forward.
We paused several times. I didn't mind; I was amazed Little Dude, who was not accustomed to this kind of physical work, was still soldiering on. Forward progress was forward progress!
He caught up to me. I pointed at the house in front of us where the road sharply curved into the grove of trees; "We're not far now. Once we hit that curve, we not only have shade, but the road doesn't keep climbing like this."
He didn't really seem to believe me, but at this point we didn't have much choice but to continue on. "Ready?"
We kicked forward again. Eventually we took the curve and stopped in a driveway where we didn't have to worry about traffic and could enjoy the slight breeze.
Little Dude was red in the face and had rivulets of sweat dropping from his head and darkening his shirt. "I can't feel my legs, Dad," he said.
"You mean they feel like something is wrong, or they're tired?"
"I think they're just tired."
Dude is sensitive to dehydration, and it was hot that day. He was drinking from his water bottle, but I knew it would be running low by then. We rested a bit in the shade, hands laced behind our heads to let our lungs expand wider and take in more oxygen, before I asked him if he was feeling better.
"I just need a minute or two," he said.
"Do you want to call it quits," I said. I figured I could run ahead and get the car, pack up my trike and return for him. He was looking really tired and I was a little concerned about how red his skin had become.
Again he thought a moment before replying, "I'm not going to quit, Dad!"
I can't really describe the pride I felt, seeing him push through his aches and sore legs to keep moving forward on his first real ride on the trike. "I'm not going to quit." He was not taking the easy way out!
"Okay. We'll keep going. Tell me when you're ready!"
We had made it up the steepest, longest part of the ride. We had a relatively flat stretch before hitting the downward portion of the trip; our bike computers registered a top speed of a little over 28 miles per hour (or, as he put it, "THAT WAS AWESOME!") At peak speed we zipped by an older couple sitting on their porch. If Rufus thought we were strange, I couldn't imagine what this couple thought the two overweight guys on these weirdly configured wheeled lawn chairs were doing as the tires hummed along the pavement and we whooped with glee at the air whipping through our hair.
We pulled off the road and stopped next to the car. Despite feeling pretty good, standing up proved to be a challenge, as my blood pressure felt like it was dropping dramatically as I stood upright for the first time in over an hour. Little Dude asked for a few minutes before having to peel himself out of his seat and disassemble the trike for racking.
I slowly released the pins and quick releases on my trike that held the seat to the frame, disconnected accessories and bags, then steeled myself to get the trike lifted onto the rack. "Take your time, Dude," I said. "You have a lot to be proud of."
I had packed protein bars for us with the intention of stopping at a picnic area on the path to rest, but as our water ran low and there was a threat of impending rain, I nixed the idea. Little Dude had scooped up an empty Red Bull can with the intention of giving it to his Grandfather for recycling and deposit redemption; their house was on the way home, so I figured we'd stop, get more water and have our protein bars while visiting.
Strava said we spent 42 minutes of actual travel time (it pauses automatically if GPS doesn't show us moving) and had climbed 263 feet over a trip of 4.28 miles. I don't think that was bad at all for a first trip out!
We learned a few things from the trip. We remembered to mount the blinking red lights to increase our visibility...but forgot to turn them on (halfway through the trip I activated them.) We also brought helmets but forgot to actually wear them, which didn't matter quite as much since trikes are a little harder to tip over and state law doesn't require helmets for riders our age. If there had been more traffic, I'd have turned around to grab them. As it was...I let it go, figuring we might feel a little cooler with a breeze, since the sun was beating down rather hard for the first legs of the journey.
I think we also need to have more water-carrying capability. Next time we're in the shop, I think I'll ask them to install an additional water bottle cage on each trike and we'll go shopping for another insulated bottle. In the meantime I ordered a set of pannier bags for my trike so I'll have increased cargo capability, then I'll see if I can find something that can hold water without fear of spillage in the panniers.
The weather has once again decided to work against us...we've had days of rain, complete with a constant flash flood watch culminating now into a flash flood warning...so we haven't been out riding again. But we do plan to head out again this weekend. The weather is predicted to break and I've been scoping out a possible bike path to try an hour away. Little Dude is looking forward to the next ride, and I have to admit that I'm more than a little anxious to hit the pavement again as well.
I recounted our trip to my wife, and we were both extremely proud of Little Dude and how hard he worked to keep going. He felt bad about the slower pace of the trip, feeling that he was holding us back...but I told him, truthfully, that there was no reason to feel bad. He was working on developing the leg muscles, he wasn't used to riding, and we both had work to do to get more proficient in riding. I had no problem with our pace...the important thing was we did it, and we pushed on. He didn't take the easy road or cut corners.
And more than that, he was still looking forward to the next ride! For now, we're keeping an eye on the forecasts and will have the trikes ready to rack. Allons-y!
Wednesday, July 25, 2018
Sunday, June 17, 2018
Why the Blog Hiatus?
In the past few years (wow, has it really been a few years already?) things have really changed for me. We (meaning my family) have had some upheavals and trials. For a while it seemed like there was no hint of good news without a wave of misfortune following close behind.
I'm the type of person with a brain wired for routine. I managed to keep some constants, such as recording the Geeking After Dark podcast with regularity, which helped maintain the illusion of control over my own life. I had a friend with contacts who managed to help me find employment after I moved back to rural PA. And of course there is the constant barrage of terrible that is the state of our country under the current administration, and worse, the support the people of this area show for the ideology of said administration, which does little to quell fear of what other people will do if given the opportunity to show their "true colors" without being held accountable for their actions.
I didn't realize it at the time, but the combination of stresses was taking a toll. Both my physical and mental health were in decline until it reached a point where I couldn't keep turning a blind eye to the situation.
Back in the beginning of January I had a "coronary incident;" my wife took me to the ER in the wee hours of the morning and I spent the next day having tests run (ever have a catheter run through your arm to see if you have a blockage to the heart? Highly not recommended.) It turned out to not be a heart attack, but pericarditis, in which an inflamed sac of tissue around the heart swells and imitates the first symptoms of a heart attack.
Ironically, I had already agreed to take the preliminary steps for a more thorough admission to a weight clinic at the hospital, and my first appointment was three days after my discharge from the ER.
I've had appointments to address a wide range of issues stemming from the stresses and...while I hesitate to call it this...the psychological trauma that had been building up over the previous two years. I have doctors coordinating across fields from endocrinology to bariatrics to try making progress on my health. Some of this I've already brought up in the podcasts; some of it I never really talked about because I didn't think it was worth mentioning.
I've been working a lot on my programming in Golang at my new job position, and thoroughly enjoying it. Usually I'd blog some thoughts or tidbits about what I learned. Then one day I looked at this blog and realized I hadn't written anything in months...I couldn't believe how much time had passed while I was in some kind of mental fog.
That isn't to say I haven't made progress. Since January 10th, I've lost 106 lbs. I've gone off several medications while cutting back on others.
Doctors have not only been forcing me to work on eating "healthy" stuff like...vegetables (yuck) but to exercise more. My current job is heavy on the sitting-at-the-desk duties, so I've been using an under-desk pedaling machine to work my legs. As summer approached I started looking at an alternative exercise that was tolerable for my taste and lifestyle; I started investigating recumbent biking. As I type this, there are two recumbent trikes on their way to an almost local recumbent dealer earmarked with my (and my son's) names.
While treatment for some of my various diagnoses has made progress, one side effect has been a diminished passion for side projects. My work on side programming projects has slowed down, but I still have an idea bouncing around in my head that I'm thinking of exploring.
So, why the blog hiatus? It boils down to the stresses of the past couple of years being addressed, and not realizing how much time had passed. I haven't disappeared. Nothing tragic happened. I've simply changed my focus for a bit. As a result, my health, so far, has been slowly improving, and I've been nursing a new obsession with recumbent triking.
If things continue to go as I hope, I'll get back to blogging more. I'll continue programming and making progress on the learning front. And maybe, just maybe, I'll continue to improve my diet and exercise lifestyle changes with the help of a new recumbent trike!
Monday, March 12, 2018
Golang: Is The Mutex Locked, And Finding The Line Number That Did It
Quick summary of the situation, giving enough details to highlight the problem but not giving proprietary information away...
I have a program that queries a service which in turn talks to a database. The database holds records identified by unique rowkeys. I want to read all of the records the database knows about, which I can enumerate through an API call, iterating in discrete steps.
The utility I created pulls a batch of these keys, then iterates over them one by one to determine whether I want to make a call to the service to pull the whole record (I don't need to if the key was already pulled or analyzed on a previous run of the program.)
Seems relatively simple, but this is a big database and I'm going to be running this for a long time. Also, these servers are in the same network, so the connections are pretty fast and furious...if I overestimate some capacity, I'm going to really hammer the servers and the last thing I want to do is create an internal DDoS.
To that end, this utility keeps a number of running stats using structs that are updated by the various goroutines. To keep things synced up, I use an embedded lock on the structs.
(Yeah, that's a neat feature...works like this:)
type stctMyStruct struct {
    sync.Mutex
    intCounter int
}
After that, it's a simple matter of instantiating a struct and using it.
var strctMyStruct stctMyStruct

strctMyStruct.Lock()
strctMyStruct.intCounter = strctMyStruct.intCounter + 1
strctMyStruct.Unlock()
Because the utility is long-running and I wanted to keep tabs on different aspects of performance, I had several structs with embedded mutexes being updated by various goroutines. Using timers in separate routines, the stats were aggregated and turned into a string of text that could be printed to the console, redirected to a file or sent to a web page (I wanted a lot of options for monitoring, obviously.)
At some point I introduced a new bug in the program. My local system was relatively slow when it came to processing the keys (it's not just iterating over them...it evaluates them, sorts some things, picks through information in the full record...) and when I transferred it to the internal network, the jump in speed exposed a timing issue. The program output...and processing...and the web page displaying the status of the utility...all froze. But the program was still running, according to process monitoring.
I first thought it was a locking problem...something was grabbing a lock and not releasing it. But how can I tell if a routine is blocked by a locked struct? Golang does not have a call that will tell you if a mutex is locked, because that would invite a race condition: in the time it takes to make the call and get the reply, the lock could have changed status.
Okay...polling the state of mutexes is out of the question. But what isn't out of the question is tracking when a request for a lock is granted.
First I changed the struct so the mutex is a named member rather than an embedded one, which frees up the struct to carry its own methods for tracking the state of the lock.
type stctMyStruct struct {
    lock       sync.Mutex
    intCounter int
}
Next I created some methods for the struct to handle the locking and unlocking.
func (strctMyStruct *stctMyStruct) LockIt(intLine int) {
    chnLocksTracking <- "Requesting lock to strctMyStruct by line " + strconv.Itoa(intLine)
    tmElapsed := time.Now()
    strctMyStruct.lock.Lock()
    defer func() {
        chnLocksTracking <- "Lock granted to strctMyStruct by line " + strconv.Itoa(intLine) + " in " + time.Since(tmElapsed).String()
    }()
    return
}

func (strctMyStruct *stctMyStruct) UnlockIt(intLine int) {
    chnLocksTracking <- "Requesting unlock to strctMyStruct by line " + strconv.Itoa(intLine)
    tmElapsed := time.Now()
    strctMyStruct.lock.Unlock()
    defer func() {
        chnLocksTracking <- "Unlock granted to strctMyStruct by line " + strconv.Itoa(intLine) + " in " + time.Since(tmElapsed).String()
    }()
    return
}
LockIt() and UnlockIt() methods are now added to instances of stctMyStruct. When called, the function first sends a string into a channel with a dedicated goroutine on the other end digesting and logging messages; the first acts as a notification that the caller is "going to ask for a change in the mutex."
If the struct is locked, the operation will block. Once it is available, the function returns, and in the process runs the defer function which sends the granted message down the channel along with the elapsed time to get the request granted.
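For completeness, the other end of chnLocksTracking is just a dedicated goroutine that drains the channel and writes each message somewhere. Here's a minimal sketch of that consumer; the buffer size and the io.Writer destination are my assumptions, not necessarily what the actual utility uses:

// Needs "fmt", "io" and "time" from the standard library.
var chnLocksTracking = make(chan string, 1000)

// trackLocks drains the lock-tracking channel, timestamping each message so
// the request/granted sequence can be reconstructed later.
func trackLocks(w io.Writer) {
    for strMessage := range chnLocksTracking {
        fmt.Fprintf(w, "%s %s\n", time.Now().Format(time.RFC3339Nano), strMessage)
    }
}

You'd kick this off early in main() with something like go trackLocks(os.Stderr), or point it at a file handle when you want a persistent trace.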
How does it know about the line number?
There's actually a library function that can help with that; my problem is that it returns too much information to not be a little unwieldy. To get around that, I created a small wrapper function.
func GetLine() int {
    _, _, intLine, _ := runtime.Caller(1)
    return intLine
}
If you look at the documentation you can get the specifics of what is returned, but Caller() can unwind the stack from the call site by the number of frames you pass as an argument and return the line number, the file, and so on...in my particular case I'm using one source file, so I only needed the line number.
Using this, you can insert function calls to lock and unlock the structs as needed. I added the methods to each struct that had a mutex or rwmutex. Using them is as simple as:
strctMyStruct.LockIt(GetLine())
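The unlock call pairs with it, so a guarded update ends up looking like this (using the example counter field from earlier):

strctMyStruct.LockIt(GetLine())
strctMyStruct.intCounter++
strctMyStruct.UnlockIt(GetLine())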
This solution provided a way to trace what was happening, but there is a performance cost. Each deferred function and channel send adds a little overhead every time a lock is taken, and I used a lot of locks throughout the program, which added up to a significant performance hit. This technique is good for debugging, but you have to decide whether you want to incur the overhead or find a way to compensate for it.
So what was my lock issue?
I set the goroutine monitoring the locks to dump information to a file and traced the requests versus the granted mutex changes. There was a deadlock in a function that summarized aggregated information: a lock near the beginning of the summary was granted, and while pulling other information, the function requested a second lock. That second lock was on a struct held by another goroutine that was itself waiting for the lock held at the beginning of the summarize function.
It was a circular resource contention. Function A held a resource that Function B wanted, and Function B had a resource function A wanted. The solution was to add more granular locking, which added more calls but in the end meant (hopefully) there would be only one struct locked at a time within a given function.
Lesson learned: when using locks, keep the calls as tight and granular as possible, and avoid overlapping locks as much as possible or you may end up with a deadlock that Go's runtime wouldn't detect!
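To make "tight and granular" concrete, here's a minimal sketch of the pattern the fix boiled down to, using the illustrative struct and methods from above: copy what you need out of one struct, release it, and only then lock the next one, so no goroutine ever holds two of these mutexes at once.

func summarize(strctA, strctB *stctMyStruct) int {
    // Hold each lock only long enough to copy the value out.
    strctA.LockIt(GetLine())
    intA := strctA.intCounter
    strctA.UnlockIt(GetLine())

    strctB.LockIt(GetLine())
    intB := strctB.intCounter
    strctB.UnlockIt(GetLine())

    return intA + intB
}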
Friday, March 2, 2018
On The Importance of Planning A Program
I'm not a professional programmer.
I'm not sure I could even qualify as a junior programmer.
What I have been doing is programming at a level that is above basic scripting, but below creating full applications. I've been churning out command line utilities for system activities (status checking and manipulating my employer's proprietary system, mostly, along with a bevy of Nagios plugins) with the occasional dabbling into more advanced capabilities to slowly stretch what I can accomplish with my utilities.
That said, I've been trying to reflect on my applications after they've been deemed "good enough" to be useful. In a way, I try running a self-post-mortem in hopes of figuring out what I think works well and what can be improved.
I was recently in a position where I had to create a utility and then, months later, got permission to rewrite it. That gave me a unique opportunity: take an application with a specific set of expectations for its output and refactor its workflow in hopes of improving both its performance and the information it gathered in the process.
For reference, the 10,000 foot view is that I have a large set of data from a large database, and we wanted to dump the contents of that database, using an intermediate service providing REST endpoint API calls, to save each record as a text file capable of being stored and uploaded in another database. A vendor-neutral backup, if you will...all you need is an interpreter that is familiar with the text file format and you could feed the contents back into another service or archive the files offsite.
It seems like this would be a small order. You have a database. You have an API. The utility would get a set of records, then iterate over them and pull records to save to disk.
Only...things are never that simple.
First, there are a lot of records. I realize "a lot" is relative, so I'll just say it's in the 9-digit range. If that's not a lot of records to you, then...good on you. But when you reach that many files, most filesystems will begin to choke, so I think that qualifies as "a lot."
That means I have to break up the files into subdirectories, especially if the utility gets interrupted and needs to restart. Otherwise filesystem lookups would kill performance. Fortunately there's a kind of built-in encoding to the record name that can be translated so I can break it down into a sane system of self-organizing subdirectories.
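The actual encoding in the record names is proprietary, so as an illustration only, here's the general shape of that kind of key-to-subdirectory mapping, using a hash prefix as a stand-in (two levels of two hex characters keeps any one directory from ballooning):

// Hypothetical sketch; the real record-name decoding is proprietary.
// Needs "crypto/sha1", "encoding/hex" and "path/filepath".
func subdirForKey(strKey string) string {
    hash := sha1.Sum([]byte(strKey))
    strHex := hex.EncodeToString(hash[:])
    return filepath.Join(strHex[0:2], strHex[2:4])
}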
Great! Straightforward workflow. Get the record names. Iterate to get the record contents. Decode the record name to get a proper subdirectory. Check if it exists. If not, save it.
Oh, there are some records that are a kind of cache...they are referred to for a few days, then drop out of the database. No need to save them.
Not a problem, just add a small step. Get the record names. Iterate to get the record contents. Check whether it's a record we're supposed to archive. If it is, decode the record name to get the proper subdirectory. Check if the file exists. If not, save it.
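In code, that per-key decision boils down to something like the sketch below; isCacheRecord, fetchRecord and the file layout are hypothetical stand-ins for the proprietary parts, and subdirForKey is any key-to-subdirectory mapping like the hash-prefix example above.

// Needs "os" and "path/filepath".
func handleKey(strKey string) error {
    if isCacheRecord(strKey) {
        return nil // transient cache record; not worth archiving
    }
    strDir := subdirForKey(strKey)
    strPath := filepath.Join(strDir, strKey+".txt")
    if _, err := os.Stat(strPath); err == nil {
        return nil // already saved on a previous run
    }
    bytRecord, err := fetchRecord(strKey) // REST call through the intermediate service
    if err != nil {
        return err
    }
    if err := os.MkdirAll(strDir, 0755); err != nil {
        return err
    }
    return os.WriteFile(strPath, bytRecord, 0644)
}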
During testing, I discover there are records whose contents cannot be pulled. The database will give me a record name, but when I try to pull the record, nothing comes back. That's odd, but I add a tally of these odd names, and a check is inserted for non-200 responses from the API calls.
Then there are records that I can't readily decode. They're too short and end up missing parts needed for the decoding process. At first I write them off as something to tally as an odd record in the logs, but I discover that when I try pulling them, the API call returns an actual record. I take this to the person with institutional knowledge of the database contents, who, after examining the sample of records, states that it looks like they were from an early time in the company's history.
Basically, there's a set of specs that current records should follow, but there are records from days of yore that are valid but don't follow the current specs.
So there are records that should be backed up...but don't follow the workflow, where I have functions that check for record validity through a few tests before going through the steps of making network calls and adding to the load on the servers acting as intermediaries for the transfer. To fix this, I insert a new pathway for processing those "odd" records when they're encountered, so they end up being queried and translated and, if they are a full record, saved to an alternative location. The backups are now separated into the set of "spec" records and another "alternative" path.
The problem is that this organic change cascades into a number of other parts of the utility; my tally counts for statistics are thrown off. The running list of queued records to process has to account for records flowing into this alternative path. Error logging, which also handled some tallying duties since it was the end of the line for some of the records being processed, wasn't always recording actual errors; sometimes an entry was just a notification that something had happened during the process, which was helpful for tracing and debugging but a problem when it marked certain stats off before the alternative record was processed.
That one organic change in the database contents during the history of the company had implications that totally derailed some of the design of my utility that took into account only the current expected behavior.
In the end, I lost several days of debugging and testing when I introduced fixes that took into account these one-offs and variations. What were my takeaways?
It would be simple to say that I should have spent some days just sketching out workflows and creating a full spec before trying to write the software. The trouble is that I didn't know the full extent of the hidden variations in the database; the institutional knowledge wasn't readily available for perusing when it resided in other people's heads, and those people are often too busy to come up with a list of gotchas I could watch out for in making this utility.
What I really needed to do was create a workflow that anticipated nothing going quite right, and made it easy to break down the steps for processing in a way that could elegantly handle unexpected changes in that workflow.
After thinking about this some more, I realized that it was just experience applied to actively trying to modularize the application. The new version did have some noticeable improvements; the biggest involved changing how channels and goroutines were used to process records in a way that cut the number of open network sockets dramatically and thus reduced the load on the load balancers and servers. Another was changing the way the queue of tasks was handled; as far as the program was concerned, it was far simpler to add or subtract worker routines in this version than in the previous iteration.
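The worker-routine change is easiest to describe with a sketch: a fixed pool of goroutines drains a channel of keys, so the number of simultaneous requests (and open sockets) is capped by the pool size, and "adding or subtracting workers" is just changing one number. The names here are illustrative, not the actual utility's API.

// Needs "log" and "sync".
func processKeys(chnKeys <-chan string, intWorkers int, fnHandle func(string) error) {
    var wg sync.WaitGroup
    for i := 0; i < intWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for strKey := range chnKeys {
                if err := fnHandle(strKey); err != nil {
                    log.Printf("key %s: %v", strKey, err)
                }
            }
        }()
    }
    wg.Wait()
}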
I'd also learned more about how to break down tasks into functions and disentangle what each did, which simplified tracing and debugging. Granted, there are places where this could still have been improved. But the curveballs introduced as I found exceptions to the expected output from the system, for the most part, just ate time as I reworked the workflow and weren't showstoppers.
I think I could have definitely benefited from creating a spec that broke tasks down and figured out the workflow a bit better, along with considering "what-ifs" when things would go off-spec. But the experience I've been growing in my time making other utilities and mini-applications still imparted improvements. Maybe they're small steps forward, but steps forward are steps forward.
Saturday, January 13, 2018
Regulations and Dieting (and Surgery)
This is a few thoughts on something common in the new year: dieting. Well, tangentially diet-related.
Part of the cascade of issues I've had in the past few months...thanks, life!...has led to appointments with the rather new bariatric unit at the local hospital. They take an all-in approach, using a team of nutritionists, fitness experts, gastric surgeons, psychologists...the whole nine yards...to create a program with a support system for patients.
Part of the intake process meant reviewing my history. This is where I learned something nifty (beyond the machine that weighs you while zapping you with a current to measure the different kinds of body fat and tissue density in your body, coming up with a profile of the good and bad stuff in there).
They asked about my past history and I told them about the gastric bypass procedure I underwent many years ago...I believe it was around 2009. April. Somewhere in there. My memory is fuzzy.
At the time, the local hospital system didn't really have a bariatric unit. While they very much seemed to support the idea that if you're fat, most of your illnesses and afflictions were weight-based and you needed to lose weight to deserve to get better, they were not known for any "let's cut parts of the digestive system apart to help lose weight" program.
There was another hospital, about an hour away from us, that did have a small bariatric surgery unit. They took me into the program, agreed to do the surgery if I lost X amount of weight first, and after reaching that milestone I had the surgery.
Not long after, during the latter phases of physical recovery, I unceremoniously discovered that not only did my surgeon retire, but the hospital killed their bariatric surgery program. There was no notice. There was no letter, no email, no announcement ever reached us. Just...nothing. No more appointments kept.
I soured on the medical system a little more at that point. There was emphasis on how important a support system was...and there is certainly no shortage of the continued feeling that when a doctor looks at you, your weight is first and foremost on their mind when figuring out how much a person is worth.
One day I had a consult about something at the local hospital and they mentioned the bariatric surgery, and how I could get followup at the other hospital.
"We can't," I said. "They shut down their bariatric unit."
"They restarted it a little while ago," they said.
Turns out, with little (read: no) fanfare or notification, they revived their bariatric unit. I have no doubt the doctors I worked with are gone; my surgeon had retired, and I can't imagine the younger doctors stuck around once their specialty had been shut down.
This came at a time when fat people were becoming (medically) profitable. Oh, sure, we're still a huge expense in cardiac care (and in this time the local hospital became a leader in cardiac care), but now some of those costs are being recouped through insurance companies via growing sleep apnea care, diabetes drugs and bariatric surgery. What had been justification for treating people as sub-human was becoming a PR race to open the best fat-care centers, in a market that previously belonged to hucksters and easy-diet schemers on television ads.
In other words, upon hearing that the other hospital had re-opened their bariatric unit without any announcement to former patients, I figured it was because it was becoming fashionable and probably profitable to do so. I certainly didn't trust them to give a damn, though. They didn't notify their old patients about it. They expressed no damns about my status. So...screw them.
The annoying thing is that the local hospital decided to pour more and more money into developing a local bariatric/weight loss program. As time went on they moved more staff into specializing in weight care. They repurposed a building just for weight loss. They focused resources on their weight loss center.
But when the topic of weight loss came up with my appointments, the moment my surgery history came up it was suggested I drive another half hour to the other hospital and continue care there.
It was during intake that I finally found out why. During the consult they mentioned something about checking the size of the stomach pouch, as it was obvious I could eat more than I was supposed to be able to. My history came up, and she said something about going to the other hospital.
I recounted my history and my distaste for dealing with a hospital that made it so blatant they didn't give a damn about their patients. She said that she could talk to the surgeon in the local hospital's weight clinic, but she knew what he'd say...no, he wouldn't work with me on it. That was when I learned why.
The government made rules.
See, to make hospitals "accountable" (that's a big buzzword for hospitals now, not just schools!) they were getting evaluated based on patient followup. In this example, I was operated on by hospital A. They had a program they wanted to end, and they did...essentially dumping their patients.
I ended up going to hospital B, my preferred hospital for most medical issues since I only went to A for a procedure B refused to do at the time. But this means that if anything was bariatric-related, B was getting (federally) evaluated for my poor outcome. At some point it seems A was pressured to re-open their bariatric program and make available their resources to old and new patients (although they didn't advertise it...take that as you will.)
That was why I was repeatedly "encouraged" to go to another hospital for some weight treatment followups. It's also why I'm not able to access certain resources at a hospital that in the years following my surgery dumped not insignificant resources into developing a "cutting edge" bariatric unit.
Once again the government is interfering in efforts they don't understand. Or at a minimum lots of hands in the pot have created a system that benefits not the patient, but some other interests, with the net effect of screwing the patient.
In the end I still have to go through their weight clinic, just with some options limited. I get to begin the new year miserably tracking calorie counts and using words like "carbs" and "abs" and "veggies," and dealing with the neuroses that I know will flare up while pursuing the accurate tracking of goals.
Will I be successful? Will I find more reason to distrust and/or outrightly dislike the hospital? Or will I fail miserably? Time will tell. But if you'll excuse me, I have to go prepare a big old egg patty with...egg. Lots of protein. Minimal carbs. Low calorie!
I really miss food.