Wednesday, May 31, 2017

Programming a Stargate

I've really loved using the Go language. Part of my exploration and tinkering has involved side projects where I'd pull information from outside sources, usually websites, and parse the response for the information I'm looking for.

I always try to be a good citizen for web scraping; I pull the minimum information I need, close connections once I get the response, insert delays between multiple page views, etc. I always try to put only as much load on a service as a regular user would when web browsing.

"What does that have to do with Stargates?"

I really like Stargate. SG-1, Atlantis, or Universe, it doesn't matter (except the animated series...I pretend that doesn't exist.)

Some people hate it when geeks watch movies and get nitpicky about details. "CAN'T YOU JUST ENJOY THE MOVIE?!"

Not always, no. When I enjoy something, I'm the type of person who enjoys not just the story, but the universe in which it is set; this means learning about the feasibility of that story universe. Oh, sure, there are some rules you have to accept in order for the story to work (such as the magic handwaving of faster-than-light travel, or lightsabers somehow not vaporizing anything too close to the wielder since, you know, REALLY HOT PLASMA...)

One of the key bits to Stargate involves using the Stargate; the dial-home device for Earth's portal was not found with the gate. The gate can, however, be manually "dialed," which is what Stargate Command does...they have a computer control massive motors that set each chevron into a locked position, while also reading diagnostic signals from the gate.

The show handwaves a lot of this process away, but I think it's implied that someone had to program the computer to attempt dialing control and reading (and sending) signals to control the gate. It's a black box; they needed to figure out "If I do X, do I get Y?" and more importantly, "Do I get Y consistently?" (Then maybe figure out what Y means. I mean, you're screwing around with an alien device that connects to other worlds, after all...) I like to think about what it took for that person to approach that black box and coax information out of it in a way that was useful.

Getting information from these websites, designed for human interaction through a web client, is like trying to programmatically poke a stargate. In the process I've discovered that many websites are frustratingly inconsistent (when I just want a list of text to parse, I sometimes wonder how many common websites are actually compliant with the devices used by people with poor eyesight or braille systems.)

For example, I tried looking at a way to query the status of my orders from a frequently used store site. I thought it would be simple...log in and pull the orders page. Nope. If you order too many items, you might have to query another page with more order details. Sometimes order statuses change in unexpected ways. The sort order of your items isn't always consistent, either. And those were the simpler problems I encountered...figuring out consistency in delivery estimates was even messier.

I tried a similar quick command line checker for a computer parts company. Turned out they had far more order statuses than I thought they did, and alerting me to changes in that order status was an interesting exercise in false alarms when they'd abruptly change from shipped to unknown and back again.

Another mini-utility I worked on was checking validity of town locations. Pray you never have to work with FIPS...

The website I chose seemed to be fairly consistent in the format of the information. Turns out I was naive in how various towns are designated, and this website was not internally consistent in showing information in a particular order. I get all sorts of interesting but very weird results for different areas around the country.

I'm sure that if I had a dial-home device (in this case, a clear API to the websites or access to an internal database) these lookups would be more straightforward. As it stands, the closest API I can use is the same as anyone with a mouse and keyboard...parsing the web page.

While frustrating at times, I am thankful that these mini-projects have taught me a few things.

  • Websites, some of which I've routinely used, are not as standardized as I thought, even within a single site. I just hadn't noticed, because when I search for particular information I only follow the links that get me what I'm looking for.
  • I end up rethinking a lot of parsing logic when digging and sorting through human language.
  • Websites implement some seemingly convoluted logic for interacting with clients, and I now have a new appreciation for web browsers.
  • I also have a new appreciation for the usefulness of a good API. If I start a business and there's anything that can be exposed through an API, I'm making it available.

Saturday, April 29, 2017

Learning By Creating Support Applications

Not long ago I started a job with a company whose primary product is a very custom application that is comprised of many smaller interoperating applications. Without getting into too much detail, the applications communicate through various APIs, many of which are not well documented.

(What follows are thoughts that are not focused solely on the new employer, but rather a set of experiences I've gathered over the years from several jobs and interactions with others in the technology field. In other words, this isn't about the current employer. It's a conglomeration of experiences, and it's my own opinion. Just figured I'd have to clarify that...)

As a company focuses on growth, there comes a time when maintenance and monitoring are moved to staff dedicated to those tasks so the developers no longer have to do triple duty. For the new hire tasked with pioneering that position, gathering statistics to get a feel for the behavior of the systems over time, taking care of regular maintenance, and handling basic troubleshooting are daunting when there is little (or no) documentation outlining how to get the metrics needed to gauge the health of the system.

And it isn't just a lack of documentation that acts as an obstacle. When a software-based company is first conceived and grows, it's natural for the programmers to work on getting the product into a usable, testable state. This means overcoming problems as they arise and focusing on results, not laying framework for delegating future operations.

That fosters institutional knowledge. The more of your system that is developed in-house, the more information future maintainers must glean about it without the help of outside references. Sites like Serverfault can help when you're trying to figure out why a new deployment of Nginx won't work, but they won't be useful when a log contains output from a Java application that Bob, three desks away, wrote while debugging a particular reply from another subsystem's API.

Small companies with a small number of developers may feel it is inconvenient to be interrupted by the new person's constant questions about why application A is dependent on application B, or how application C discovers a service status on server 3. As a new hire, I feel a little hesitant to approach others with these types of questions, preferring to look for answers through other means before taking someone else's time.

(In my opinion, if the answer is to check the source code from the repo and read that to get the answers, you may as well have hired a new programmer; recognizing a need for someone dedicated to operating and maintaining your system outside the coterie of coders is a sign that there may be a need to dedicate time to documenting and tooling the application for non-programmer use.)

How can a new hire get a grasp on this situation?

In this case, I've been writing a series of Nagios plugins specifically configured to pull metrics from the various subsystems in the company application. There were cases where what I thought was a simple task turned out to be more nuanced than it first appeared; each time, I ended up discovering something more about the operation of the system, and I made sure it was documented for later reference.

Each time there was a failure case, I would make a note and start work on a new monitor so we'd know about it in the future. These monitors didn't just collect a snapshot of the current state of a service; they gathered metrics that were sent to a database and from there plotted in a graphing application for performance monitoring.

The current product relies on database performance; some queries are straightforward while others require processing of filters, and they behave quite differently. Some of my checks measure response times.

Others are querying API endpoints for replies of what the services believe are their current health states.

Some queries are pulling the status of database indexing.

In cases where the application is exposing information through Java beans, my plugins are pulling numbers from JMX and checking for values within established expectations.

In other cases, plugins are checking for the existence of files that are supposed to be regularly updated and when certain records are updated in the database.

Each of these plugins, once finished and deployed, is documented so that new hires can easily find a list of how the checks work and gather indirect information about aspects of the in-house application's operation without programmer-level institutional knowledge.

In the case of my new position, I've gained a higher respect for the value of meta-applications in gaining insight on how a complicated system works. Having information written out or explained to you is enlightening, and I never feel that documenting how something works is a waste of time. But until you find yourself executing on that knowledge, I'm not sure you really understand the subject. Creating support applications that meaningfully interact with the system pushes knowledge into the realm of wisdom the way reading about the science of flight comes alive after building your first remote control plane.

When confronted with the task of comprehending the colossal, try learning about the limited first, with applications that monitor and interact with small aspects of the system. Not only will others benefit from the support applications, but you'll benefit from the mental exercise and end up with a better model of how everything works!

Thursday, March 23, 2017

Golang: Remember This When Using Select For Monitoring Channels

I thought I would share something that's easy to overlook when using a loop to listen for messages from channels in Go.

The following is a simple code snippet:

for {
    for a := range structOfChannels {
        select {
        case msg := <-structOfChannels[a].Chan1:
            _ = msg // process the Chan1 message here
        case msg := <-structOfChannels[a].Chan2:
            _ = msg // process the Chan2 message here
        default:
        }
    }
}

All this does is rotate over a series of channels looking for messages and processing them. The default case makes sure the loop doesn't block on the first iteration of select, and the condition-less for means it runs forever.

I noticed, when running Activity Monitor (this was using Go 1.8 on OS X) that the processor would stay near 100%. The system seemed responsive, but the processor staying that high was, to me, annoying.

The solution was simple; make the loop wait a fraction of a second each iteration.

for {

    tmTimer := time.NewTimer(time.Millisecond * 50)
    <-tmTimer.C

    for a := range structOfChannels {
        select {
        case msg := <-structOfChannels[a].Chan1:
            _ = msg // process the Chan1 message here
        case msg := <-structOfChannels[a].Chan2:
            _ = msg // process the Chan2 message here
        default:
        }
    }
}

This just makes the loop wait 50 milliseconds before ranging again, a pause smaller than most humans would perceive but enough that processor use dropped to near nothing.

There are a few other approaches with a similar effect. For example, if you're worried about the overhead of creating and freeing the NewTimer(), you could create a NewTicker() outside the for{} scope and keep reusing that. You can also probably lower the milliseconds to smaller values and see where the processor starts kicking up, but I'll leave that to the reader to experiment and tune.

The point is, because the system seemed responsive, it was easy to overlook the effect of a simple for{} loop used to monitor messages from goroutines and there's a possibility this could have an effect when deploying to servers. Check your performance when testing your work!

Monday, March 6, 2017

When To Use "+" And When To Use "%20": Adventures In URL Encoding

I've been working on some Go-based utilities to interact with a website application written in Java. Part of this involves, in many cases, encoding database queries that are submitted to API endpoints on the Java application and interpreting the returned results.

In the process I learned something new about encoding easier to read/more human-like strings to encoded strings for the server. Namely, the standards seem broken.

I jest, but really the trouble was a matter of "it works if you know specifically how to make it work for this case."

My workflow would involve a Curl command line from a coworker with a library of working queries he had scripted out for use in other situations. I'd take that and translate it into the utility or Nagios plugin I was writing.

I took the string used in the Curl sample and fed it into Go's url.QueryEscape(), then sent it to the database endpoint with

req, err := http.NewRequest("GET", strURL+"?q="+strQuery, nil)

...which promptly spit back a syntax error. Huh?

A little digging later, I found that the standards define an encoded space as either "+" or "%20". It isn't always clear when each is acceptable, and different languages vary in how strictly they interpret the standards.

The first red flag here is that language encoding libraries implement these choices differently, but I still felt kind of stupid at first, not knowing what I was doing wrong. My self-flagellation eased a bit when I saw there was even a bug report for Go. It didn't go anywhere in terms of changing things; the language still encodes spaces as pluses and not percent-20s, but it at least confirmed I'm not the only one scratching my head over why it wasn't working as expected.

A more elaborate answer was found on Stack Overflow. The best explanation wasn't the top answer, but it boiled the situation down to the existence of different standards for different parts of the URI: backwards compatibility means %20 is generally safest to use, but technically %20 should be used before the ? in a URI and + after the ? to denote a space.

In my case, Go liked the + for escaping strings and eschewed the percent-20. My fix? Right after running the url.QueryEscape():

strQuery = strings.Replace(strQuery, "+", "%20", -1)

Not the most elegant, but when I submitted that strQuery, the Java application was happy!

My takeaways:
1) Something I thought was simple...feeding a string to an escape function for encoding properly to a URI format...isn't necessarily straightforward. If you have trouble, find out if your application is expecting pluses or %20's for spaces.
2) Computers are binary...it works or it doesn't. But implementations of standards are still influenced by people, and languages (and libraries) are implemented by people, so even given the constraint of binary...people still make things more complicated in practice.
3) Given the confusion of + versus %20 when searching around online, I'm not the only one having this kind of issue.
4) Just use %20. Unless I run into a specific case where the other side isn't translating %20 correctly.

Wednesday, December 7, 2016

Creating a Test Ensemble with ZooKeeper and VirtualBox

What is ZooKeeper? It's an Apache project that provides a server for distributed information and message storage among distributed processes. If you have a number of servers that need to coordinate certain information, ZooKeeper might be useful, especially if your project uses Java.

The interface is very reminiscent of a simple filesystem, using "znodes" as files that can have sub-znodes containing more information. In addition to passing and storing small bits of information (less than a megabyte per znode, as I recall), znodes can also be ephemeral: when a server connects to the ZooKeeper ensemble (cluster), it registers a znode that other servers can find. When the server goes offline, the znode disappears, so your application can be informed (if it set a watch on that znode) that the server is no longer available to the cluster, or it can search the available znodes before connecting to a system.

That's the simplest overview.

I created 3 ZooKeeper nodes using VirtualBox.

I first installed one Ubuntu Server VM. I named it Cluster1 for both the VM name and the hostname, and used 64-bit Ubuntu Server 16.10. The VM had an 8GB drive (sparsely allocated so it didn't eat all the space on my workstation right away), 1GB RAM, and bridged ethernet.

While running through the install I added the SSH server package when prompted.

Once the VM was running I ran
sudo apt-get update
sudo apt-get upgrade

I also ended up running
sudo apt-get dist-upgrade

At that point it no longer had packages to update nor packages held back.

I shut down the VM and told VirtualBox to clone it. The first was named Cluster2 and the second clone became Cluster3. During the clone wizard step-through I told VB to reinitialize the MAC addresses on the network cards and do a full clone so these are independent VMs.

I fired up Cluster2 and changed the hosts and hostname files in /etc to reflect that the machine is cluster2, not cluster1, then repeated the step for Cluster3. A restart of the two machines should now show the proper names.

Now I have 3 small servers running. In each of them, I ran
sudo apt-get install zookeeper

I edited the /etc/zookeeper/conf/myid file so cluster1 had the value 1, cluster2 had the value 2, and cluster3 had 3. In the /etc/zookeeper/conf/zoo.cfg file, I added the IPs for each of the three machines, matching the 1, 2, and 3 values, like so (showing just the server-specification section):
server.1=192.168.254.1:2888:3888
server.2=192.168.254.2:2888:3888
server.3=192.168.254.3:2888:3888

I used the IPs for each server because I didn't edit any hosts files or local DNS to allow finding these ZooKeeper servers by name, although that could certainly be done. On the other hand, using the IP means no DNS lookup, so I might have shaved a few milliseconds off communications.
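For reference, the relevant parts of /etc/zookeeper/conf/zoo.cfg end up looking something like this (the timing and path values shown are common defaults from the Ubuntu package; yours may differ):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181

server.1=192.168.254.1:2888:3888
server.2=192.168.254.2:2888:3888
server.3=192.168.254.3:2888:3888
```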

The default install didn't have any service scripts, so "service zookeeper restart" leaves Ubuntu scratching its head at you. Install some add-on scripts using:
sudo apt-get install zookeeperd

At this point I can run
sudo service zookeeper restart
sudo service zookeeper status

A basic ensemble (or cluster) should now be running!

How do you test this...or at least do something with it? There's a Java CLI tool included with ZooKeeper, but it turns out there's a bug where a particular environment variable isn't set. It's not a big deal...just set it before trying to run the tool.
export JAVA=java

Now you can run the tool. This will launch it, and connect to a local server instance.
/usr/share/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181

From here, you can use the "help" command to get a list of available commands. To just kick the tires a little, I ran these commands:
ls /
create /zk_test My_Data
ls /
get /zk_test
set /zk_test test
get /zk_test
delete /zk_test
ls /
quit

And as I ran through the list of commands (creating the zk_test znode, seeing the data stored as the string "My_Data", setting the data to "test", and finally deleting the znode) I would list and set information from different VMs to see that the data was synchronizing properly.

Thursday, November 10, 2016

Time Machine With File Vault Corruption: Reformatting the External Drive

I've already written one post that went into detail about reformatting an external drive that acted as my Time Machine backup for my Mac. External USB drives can be bumped and the cables loosened, raising the chances that data corruption will occur; encrypted drives are really cranky when that happens.

In my case, reformatting and starting over is acceptable for recovering and getting the system backed up again as long as there isn't any indication that the hardware itself is failing.

This time around, every attempt to access the data met with failure. The diskutil command line utility seemed to show an encrypted volume being found and mounted by the system (even though it wouldn't appear in Finder or in Disk Utility...Disk Utility kept hanging with a spinning beach ball and wouldn't show any drives at all when it launched). Diskutil showed an unlocked encrypted volume on my Time Machine drive, but it had no data, and attempts to reformat the drive from the command line returned an error to the effect of "resource busy."

After several attempts to read the volume I decided to try just obliterating the data on the drive by wiping as many sectors as I could at the beginning of the drive. Unfortunately, attempting to do that returned a resource busy response. A daemon on OS X was trying to access the drive, and that prevents direct access to the device.

But there is a way around it.

Use this command to identify your external drive (the Time Machine drive, in my case)

diskutil list

Su to root.

sudo su

Disconnect the drive from the USB port. Then use the up arrow and enter key to repeat this command when plugging the drive in again. The goal is to execute this command just after the system sees it, but before the auto-mount daemon tries to be helpful and prevents your access. Replace "disk1" with the correct disk number found with the diskutil command above. Triple check that you have the correct drive: if you overwrite the wrong drive you will be very unhappy, and it's not my fault.

cat /dev/random > /dev/disk1

This will overwrite data on the drive with random gibberish. Once enough sectors are overwritten...like the partition table...the drive will be seen as ready to be formatted by OS X. This process is not going to give much feedback, and since the drive is large, it could take forever to complete. I advise letting it run for several minutes, then using control-C to abort the command.

At this point I used Disk Utility to format the drive, then opened Time Machine to remove the previous backup drive and re-add the "new" one.

And yes, I re-encrypted it. I'd rather not let the backups be readable by others, despite the risk of corruption wiping the backup, and creating a new set of backups took only about a day to complete.

Tuesday, October 25, 2016

Apple Remote Desktop (ARD) Can't Find Machines

One of ARD's more entertaining tricks is "forgetting" machines on the network. I'm still not sure what triggers this, but it certainly is among the more annoying behaviors to crop up.

There are a couple of sites that mention this kind-of sort-of trick to kick ARD in the head, but I thought I'd make a note here for my own quick reference in one place. These notes should work on El Capitan (10.11) and Sierra (10.12).

Summary: Remove cached settings from ARD, remove network DNS/ARP caches on machine, kick ARD in the head...

  1. In ARD, go to the All Computers list, highlight the machine names and delete them.
  2. Quit ARD.
  3. Flush DNS cache: sudo dscacheutil -flushcache;sudo killall -HUP mDNSResponder
  4. Flush ARP: sudo arp -ad
  5. Kick ARD in the head by restarting the ARD agent (on clients): /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -restart -agent

Start Remote Desktop again and re-scan the network. Because the clients were removed, attempting to view/connect may require you to re-enter credentials.

Also keep in mind I noted testing this on El Capitan and Sierra. Another annoyance with OS X releases is that the syntax/procedure for flushing DNS changes alarmingly often, so it may take some Googling if your release is different.

The last note I have is that if this doesn't work, check that a network hiccup didn't force the client's wireless to shrug its shoulders and give up, meaning the actual problem all along was that the client couldn't be remotely managed over the network.

Whoops.

Wednesday, September 21, 2016

Skittles are to Refugees what M&Ms are to Not All Men

Side note: I can hardly believe how long it's been since I've added to this blog...but it looks like several months have flown by since my last entry. I guess I took an impromptu blog break while I was heads-down on some personal Go programming projects. Amazing how something can expand to fill your spare time activities...now I have programming plus personal issues to nudge me into remedying my blog hiatus status...

This entry is not a Go-related topic. This, instead, is an entry about a Presidential candidate's campaign assertion that, when I heard about it, felt eerily familiar.

Donald Trump, Jr. used the recent bombings in Chelsea and New Jersey to compare refugees to a bowl of perhaps-poisoned candy. Basically, the argument goes, if you have a bowl of Skittles and 3 of them were poisoned, would you take a handful?

The makers of Skittles were not amused, as you can imagine. Their reply simply asserted that Skittles are candy while refugees are people, so they did not believe the analogy was proper (side note: did you know Skittles is owned by Wrigley Americas? I thought they were known for gum...)

Leaving aside the argument that the suspect in the bombing is a naturalized American citizen, or that the actual odds of dying at the hands of terrorism in the US are minuscule compared to heart disease, being struck by lightning, car accidents, or, in the US, being shot, hearing this tweet make the rounds on the usual social media reminded me of another "would you want to risk <eating a large amount of innocuous, common food> if you knew there was a <tiny but acknowledged number> that were deliberately fatal?" meme, only used for the opposite, pro-social-justice argument. It wasn't hard to uncover it.

Apparently the Trump campaign was resurrecting the old counter to the "Not All Men" argument, which used M&M's instead of Skittles. That meme was a response to the idea that not all men are terrible, so please don't overgeneralize about all men being <murderers || rapists || chauvinist pigs || etc>. It seemed to make sense: yes, not all men are terrible, but saying so just derails the actual point, deflecting focus onto the population of good people instead of the very real danger posed by the men doing bad things. I had forgotten that meme...and only realized now that it seems to have largely disappeared from the social media rounds. Or perhaps I had simply stopped paying attention to the waves of regurgitated hive-thoughts posing as original thought...

Thanks to the anti-Trump sentiment, though, this iteration of the poisoned candy argument didn't last long before a rebuttal made the rounds. Now the small-population-of-poison-in-the-batch argument is linked to anti-Semitic material from Nazi Germany. In the heartwarming story Der Giftpilz, Jews are compared to mushrooms in the forest: there are good people and good mushrooms, and there are bad people and bad mushrooms, and bad mushrooms can kill whole families...so you must be vigilant against poisonous Jews killing your family. The author, Julius Streicher, was executed as a war criminal.

Oversimplifying to the point of overgeneralization (ironically, in the case of what I'm about to say) is rarely, if ever, effective when analyzed. It is a propaganda tool; a way to get eyeballs with a headline without actually having a headline. In the cases here, these were used as tools to manipulate people using what seemed, at first thought (and rarely a second thought applied) logical, sound reasoning. It takes more thought to understand the nuances of the actual issues involved...and these shortcut-think-phrases are simply a way to appeal to lazy supporters of side X, and to possibly deflect from the actual goal or reasoning behind a movement or idea.

In the case of the poisoned candy, if you're told there are definitely, say, 3 poisoned items in there, of course reasonable people are not going to eat them (or in some variants, feed them to their kids.)

Of course it ignores that candy are not individual people with complex, nuanced personalities.

It ignores that a reasonable person has little reason to believe that any candy are poisoned in your average bag of bulk candy.

Or that the actual odds...math that we, as human beings, have minds poorly wired to reason about...of dying from terrorism are nowhere near the odds that "3 of <a bowl> of candy" will kill you, unless the bowl were perhaps a swimming pool, or you apply the analogy to something purposely vague so that every jackass making a sexist or unwanted comment to a passing stranger counts as a poisoned candy.

It also ignores the ethical motivations of the rest of that candy bowl...that they're people, searching for safety, fleeing a war they had nothing to do with, and the vast majority want nothing more than to live their lives in reasonable safety.

And it ignores the possibility that the candy is loaded with sugars that contribute to the diabetes and heart disease that are more likely to kill you than terrorism despite the "good" label applied to them.

And it certainly doesn't acknowledge that there is no binary "safe vs. unsafe" activity in life. I often wondered about this when a religious person would talk about the evils of gambling...isn't life a gamble? You get out of bed without thinking that the shift in blood pressure could trigger a stroke, and take a morning shower without acknowledging that you could slip, fall, and crack your head. Taking a number two can strain your heart and cause a heart attack. Eating a meal can cause you to choke to death. There is no 100% safe activity in which you're not betting that you'll be okay performing a relatively common thing, and if gambling is the act of wagering on an uncertain outcome, life is filled with uncertain outcomes.

In the end, I'm not indicting the social movements that led to these memes. My post is an indictment against the type of thinking that leads people to treat these thought-bites as if they were entire arguments for or against an idea instead of the bullet points they really are; we are a culture that mistakes sound bites and headlines for actual news, when the actual story requires research of some depth to even begin to understand and empathize with.

Worse, we have so much information, so many sound- and thought-bites begging for our attention that people (and media outlets) treat stories like the recent Angelina Jolie and Brad Pitt divorce filing as something more deserving of headlines than a gossip-column footnote.

Perhaps this is also a reflection of how people process information; perhaps before, we didn't have the technology to indulge in sifting through a plethora of visual clickbait and having the luxury of ignoring nuance. Or perhaps people have always been full of uninformed opinions, but now we are graced with social media giving a voice by which to proclaim these ideas. How much we are shaping our information and media tools versus how much we are shaped by them is an exercise for philosophers and time to measure.

Unfortunately, I can't pretend to be above the influence; I can only acknowledge that it happens and try to limit the degree of validity I assign to the resulting fallacies. The best thing I've done is limit my exposure to social media, and even popular news outlets. I've gradually cut things out that others take for granted; as satellite (and cable) TV grew more expensive and we tried to cut bills, we stopped watching TV (and I am still amazed at how little tolerance I have for commercials as a result). I configured Twitter to dump tweets directly to Facebook, which lets me update virtual relatives and friends on life events without having to sift and post there, and limits the amount of regurgitated cruft from the Facebook timeline that inevitably led to a "here's a Snopes article that, had you spared 5 minutes to Google, would have told you what you just said was pure crap" reply.

So take a minute and reflect on the true meaning of a soundbite. What is the truth behind it? What is the possible true motivation behind the meme? And most of all, why are you willing to support, or fight, that meme?

Sunday, May 1, 2016

Goroutines: Are They a Tree, or Independent?

I was working on a side project when I ran into a question about goroutines spawning goroutines: if you spawn a goroutine from main() (I'll call it Offspring1), and Offspring1 spawns a goroutine called Offspring2, and then Offspring1 returns, what happens to Offspring2?

Does it die, like pruning a branch off a process tree?

Or does Offspring2 keep running?

I wrote a small test application to find out.

The Test:

package main

import (
	"fmt"
	"time"
)

var chanRunner2 = make(chan string)
var chanRunner1 = make(chan string)
var chanStop1 = make(chan bool)

func main() {

	// a tells Runner1 to stop at 5 seconds; b ends the program at 10.
	a := time.NewTimer(5 * time.Second)
	b := time.NewTimer(10 * time.Second)

	go Runner1()

	for {
		select {
		case <-a.C:
			chanStop1 <- true
		case strMessage := <-chanRunner1:
			fmt.Println(strMessage)
		case strMessage := <-chanRunner2:
			fmt.Println(strMessage)
		case <-b.C:
			fmt.Println("DONE!")
			return
		default:
			continue // nothing ready yet; re-evaluate the select
		}
	}
}

// Runner1 spawns Runner2, then sends a message every 500ms until
// something arrives on chanStop1.
func Runner1() {

	go Runner2()

	c := time.NewTicker(500 * time.Millisecond)

	for {
		<-c.C
		select {
		case <-chanStop1:
			return
		default:
			chanRunner1 <- "Howdy from Runner1!"
		}
	}
}

// Runner2 sends a message every 500ms, forever; nothing ever tells it to stop.
func Runner2() {

	d := time.NewTicker(500 * time.Millisecond)

	for {
		<-d.C
		chanRunner2 <- "Hello from Runner2!"
	}
}

Like my previous "let's test a theory" applications, this one is pretty straightforward. There are two functions. Runner2()'s only job is to create a ticker that ticks every 500 milliseconds; when the tick fires, it sends "Hello from Runner2!" to a channel called chanRunner2.

Runner1() is just like Runner2(), except it first spawns Runner2() before it starts firing a slightly different message into a channel called chanRunner1 every 500 milliseconds. There is one other small addition: Runner1() also listens on a channel called chanStop1, and if anything comes down that pipeline, it returns.

Then there's main(). It creates two timers (not tickers), one that will fire in 5 seconds and one that will fire in 10 seconds. main() then spawns Runner1() and starts a loop listening for either timer to fire or a message to arrive from chanRunner1 or chanRunner2, with a default of "continue" so the select statement keeps re-evaluating in a loop.

Expected Output:

Because the goroutines use tickers (not timers...there's an important difference...), the output should be "Howdy from Runner1!" interspersed with "Hello from Runner2!". After 5 seconds, the first timer fires and Runner1() returns. Then one of two things happens: either both lines stop appearing, because Runner1() returning kills Runner2() with it, or "Hello from Runner2!" continues alone for the next 5 seconds, meaning you can kill the goroutine that created another goroutine without having any effect on the "grandchild" goroutine of main().

Actual Output:

Drumroll, please...

./chained_goroutines
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Howdy from Runner1!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
Hello from Runner2!
DONE!

There it is; Runner2() kept running after Runner1() exited. Something to keep in mind when modeling how your application works!

Wednesday, April 13, 2016

Athens School Board's Response to the 2016 Threatened Strike; Disingenuous Again

The teachers' Union has finally...after over 3 years of negotiation stalemate...threatened a strike.

The School Board responded pretty much as I thought they would; they quickly put up a rope and aired their version of the dirty laundry. And just as relevant, they blamed the teachers for making the britches dirty even though it's their crap on the backside.

The Board posted their "Responsnse from the AASD school board.pdf" on the school website. (The typo is theirs; I thought it amusing, so I pointed it out, although they may change it later.) The venue still pisses me off: the Union can't post its side of the story to the school website, but the School Board likes to post propaganda to the front page of the district site, making it harder for the public to piece together a coherent picture of what's going on, if they are so inclined.

A werd frum ower sponser

They start off painting the teachers as evil, untrustworthy and shifty. In their response, the Board used phrasing such as, "We would like to add that at no time has the Athens Area School Board negotiation team members met at the table, unwilling to negotiate" to insinuate that they are the victims of a vicious Union (although phrasing is kind of important, since this sentence could be read as they simply never met at the table...). They say, "We have not put unrealistic timelines or demands on the AAEA, while that has not been the case by the AAEA." That's strange, given that the opening of the paragraph states they've had 3.5 years with no meaningful movement in negotiations.

After setting up those dominoes, the Board points out with evident self-satisfaction that the Union reneged on their official statement from November 2015 that stated they would allow a deadline of four weeks after a state budget was passed for a reasonable settlement to be proposed.

The state budget was officially allowed to lapse into passing on March 27th, and here we are with a threatened strike STARTING ON APRIL 18th! THOSE LYING UNION BASTARDS! They even ended the paragraph with an unsubstantiated claim that they offered to meet for negotiations but the AAEA "simply would not meet with us." In summary, "You can clearly tell we're victims of these horribly unreasonable jackbooted thugs." You can almost picture them cringing as the teachers march into the room in full uniform regalia, drooling in anticipation of crushing the kindhearted and well-intentioned School Board under their collective bargaining heels.

And they're right! The Union did take a strike vote in less time than promised. However, the Board failed to emphasize that the ultimatum was for a reasonable proposal and show an attempt to bargain in good faith. That's really a weak ultimatum. It's like telling your kid they better not hit their sibling again or you'll maybe punish them. Reasonable proposal. Show an attempt. And how did the Board react?

Yeah, seems reasonable

The Union stated publicly at a school board meeting on April 12th, when questioned about this decision, that the negotiation team felt the Board was refusing to further negotiate after their last session as a member of the School Board’s negotiation team declared, “We have nothing more to talk about.”  At that point, the Union’s executive team made the decision to ask their members to vote on a strike. 

The next bit of the Board’s response is an outline of goals meant to establish how reasonable the Board is. Three are pretty clear-cut. The last one is fluff, as there's absolutely no way to measure it, but it makes them sound like they care about something important. Be allowed to hire the best professional staff members? After they stated goals that equate to trying to save money and lower pay, during a three-and-a-half-year standoff? You don't want the best. You want the most naive new recruits, too inexperienced to know that you're beating them in the head with a switch from a whackin' tree while you're telling them you're doing them a favor. Pro tip: don't start off highlighting why you're being browbeaten for 3+ years by an evil Union regime while bravely fighting to reduce your teachers' benefits and pay, only to end by saying you're trying to recruit the cream of the crop to work for you.

Next, they decide to roll into the biggest issue, the nearest and dearest to the taxpayer heart—teacher salaries. Teachers are too expensive to hire! The Board repeatedly presses to not give retroactive pay (after over three years of refusing to actually settle the contract, pretending that when this one is eventually passed they won't have to immediately settle the NEXT CONTRACT because they couldn't do their job...) and lower the pay increases due when teachers increase their experience/education levels. They do this by appealing to the public's basic grasp of math, because nuance is hard.

TEACHERS MAKE $66,000 A YEAR ON AVERAGE! THE AVERAGE BRADFORD COUNTY INCOME IS $48,000! HOW FAIR IS THAT?! They even published a table of teacher salaries; they were kind enough to omit the names, but it wasn't really much of a kindness. Teacher salaries are public knowledge. While the table they provide is semi-anonymized, the data has enough information to combine with the links to the (slightly out of date) data for public teacher salary records to figure out who is who, with a bonus of now knowing their employee identification numbers used in internal business records. So, yay for more "here's how you phish for data" handed out.

Holy shit. That sweet Bill Gates paycheck must be why the teachers be rollin' in to the parking lot with diamond-encrusted Lambos and gold-trimmed Porsches. Sounds pretty bad. Why do they get so much when I don't!? The Board doesn't link to the document from which they pulled the numbers, but mentions it's from the US census.

They weren't lying, but they weren't entirely truthful. Here's a handy link to the census information at Census.Gov. It's kind of weird that the Bradford County household income is $48,000, but the US average is $53,000 and Athens Township has an average income of $51,700. But I guess the $48,000 statistic paints a more outrage-inducing picture.

But is that the whole picture? Probably not, considering that income tends to be tied to education level. Teachers are required to have ongoing education credits. Basically the government tells them they need continued schooling or some equivalent (One of the step items the Board wants to not pay them extra for having attained) in order to retain certification. Nearly 88% of Bradford County aged 25+ (and 90% of Athens Township) have high school degrees or higher. But only 17% of Bradford County (and 23% of Athens) has a bachelor's degree or higher! 

And in Athens, the major industries are...hospitals...the school...and...what? Most of your business booms are fast food, Wal-Mart and new hotels. At least, those are the visible new jobs. City-data.com says Athens' most common industry is manufacturing (27%) and the most common occupations are production- and construction- related (15% and 12%, respectively.) Knowledge workers with higher degrees seem to leave the area.

But of the educated, what are their average incomes? The Board is comparing a large population of mostly non-degreed members with teachers, who not only have at least a Bachelor's degree, but are required to continue with education in a rather specialized niche. It's not uncommon for teachers to end up with master’s degrees or higher. They're almost forced to.

The Bureau of Labor Statistics says the median weekly earnings of a person with a Bachelor's degree (2015) is $1,137. There are 52 weeks in a year, so that comes to a little over $59,000/year. Strange...that's not too far from the teachers' salaries. Master's degree holders earn about $1,341/week, or $69,732/year.

That means the teacher income average in Athens, at $66,000/year, is well within the "average" mark. A high school diploma average is $678/week or $35,256/year, for what it's worth.

I suppose the board would say that the $66,000 is still significantly higher than the average for having a Bachelor's degree. Let's take a quick glance at the data in the tables they published online to illustrate how overpaid their semi-anonymized teachers are.

Interesting how the median looks like a middle finger
Most of the staff were hired between 6 and 18 years ago! I count 27 people hired before 2000, 52 employees were hired between 2000 and 2010, and 18 from 2010 to 2015. Four of the 27 were hired in the 80's! There's a significant number of experienced staff! (Note I figured this up by hand from the chart the board provided. I may have a miscount. Feel free to double check, let me know if I missed something in the comments...)

Remember that bit the Board claimed about wanting to hire the best? Unless your job makes you bitter, you usually get better at it with additional experience.

Unfortunately that's the garlic to the Board's vampire. These are educated professionals; a significant number of them have decent experience, and have been getting continuing education. That means they're going to be closer to the upper pay limit both because they've been there a while AND they have paper saying they're smarter.

In other words, you're paying for people who are better. And the Board doesn't want to pay them. I wouldn't be surprised if part of that spike in 2013 through 2015 is comprised of inexperienced graduates...they're cheaper.

And that's what the board is focused on. Teachers are expensive. Cut them down at all costs. 


The Board’s information about the teacher workday is also misleading. Teachers are obligated to work 7.5 hours a day with a half-hour duty-free lunch (SLACKERS!).

The language in their response is kind of funny: "only work a 7.5 hour day." The 40-hour workweek, last I checked, consists of five 8-hour days. Even McJobs are required by law to give you a 30-minute lunch when you work more than five hours in a single shift, plus breaks. The Board is complaining that the teachers aren't obligated to work the hours of an hourly fast food worker.

Well, not quite. The board goes on to complain that in addition to the 2.5 hours of lunchtime they get a week, teachers are allowed 3.75 hours per week of self-directed time. THEY WERE EVEN ALLOWED TO GO TO WALMART OR THE BANK. It's like the inmates are running the asylum. I'm pretty sure at one point the Board proposed adding instructional time by having staff wear diapers to save trips to the bathroom. (That 3.75 hours was roughly 45 minutes a day. According to most teachers, this time was usually used to correct papers and prepare for another class period. They weren't watching Netflix or leaving en masse to get more adult diapers from Walmart every day, although I can see why the Board would be horrified that teachers would run an errand while stores were open.)

To further illustrate how unreasonable teachers are, the Board said they REFUSED to work an additional 15 minutes a day without being compensated. What they still refuse to acknowledge is that teachers already work this additional time. They're just not contractually obligated to do so during the school day. The Washington Post reported that the Bill & Melinda Gates Foundation along with Scholastic issued a report showing that the average teacher works 10 hours and 40 minutes a day. Yes...obligated to work 37.5 hours a week, but actually on average, working 53 hours a week.

From their web page:

The 7.5 hours in the classroom are just the starting point. On average, teachers are at school an additional 90 minutes beyond the school day for mentoring, providing after-school help for students, attending staff meetings and collaborating with peers. Teachers then spend another 95 minutes at home grading, preparing classroom activities, and doing other job-related tasks. The workday is even longer for teachers who advise extracurricular clubs and coach sports —11 hours and 20 minutes, on average. As one Kentucky teacher surveyed put it, “Our work is never done. We take grading home, stay late, answer phone calls constantly, and lay awake thinking about how to change things to meet student needs.”

To my knowledge, the Board has never acknowledged this. In fact, they have previously attacked teachers for being overpaid while using only the contractually obligated work time as their measure. This extra time is no secret among teachers, and it's nothing new. Just doing the basic math for hand-grading a two-page report for 30 kids in one class can take a significant chunk of time from an English teacher; if you assume 5 minutes per report (an unrealistically low estimate to begin with), that's 150 minutes, or 2 hours and 30 minutes. For one class. What does the Board think is going on during that "self directed" time? Teacher disco hour?

It's at a point where the Board would have to be purposely playing stupid not to know how this factors in. The requirements put on teachers, with homework correction load and prep, make accomplishing what needs to be completed within the time allotted a joke. It's as if you were tasked with moving a giant mound of sand from point A to point B, and you will be paid for one hour of work. The job has to get done, or you're fired and not paid at all...even though it takes two hours to accomplish. Yet teachers still accept this as part of the burden of the profession, even as the Board makes it abundantly clear that they're either clueless about what it takes to teach, or they simply enjoy making the work environment miserable.

It’s laughable that the Board insists they "...understand the importance of its professional staff to remain lifelong learners" before moving on to propose eliminating some salary benefits to continuing education along with a "if you leave within 4 years of tuition reimbursement you have to pay it back" and a cap on reimbursement spending. Does the Board not understand what they're saying there? "We know this is important. So we'd like to limit it in many ways, along with adding financial uncertainty by forcing you to pay back the education you're required to get if something happens where you leave our employ AND only some teachers can continue their education at a time.” How do you negotiate with this kind of cognitive dissonance?

Then they start winding down with some mini-zings, the bits that don't all seem to make much sense as points of contention unless they are actually put into context. Eliminating the transfer clause that allows seniority to be factored into filling vacant positions? What's the problem there? Not much, except it pretty much is meant to allow administrators and the Board to place favorite hires into new positions and add pressure to get rid of the expensive experienced teachers, assisting in eliminating positions by attrition (see the number of recent hires? Just speculating...)

Most of the Board's response (or "Responsnse", which still makes me giggle) is disingenuous at best. They even claim, "There have been 2 independent fact finder reports completed in the last year. Both reports were rejected by the AAEA." Those unreasonable Union bastards!

Those unreasonable Union bastards...wait, what the hell?

Um...that's kind of awkward. It pretty clearly says that the fact finder report was yet to be voted on by the Union for acceptance when the Board already rejected it. October 8 of 2015. Unanimously. It's available on the labor relations board website, by the way.

But if you were to just read the Board response, the rejection was all on the Union. Funny how a simple Google search shows that insinuation is utter crap.

The next page had a listing for an article from June 2014 when the board once again rejected a fact finder's report (and this one pointed out the teachers accepted the report.)

And for all the calls for saving money, the Board seems oddly bent on wasting money in other areas. For example, they recently spent $15,000 on a study that told them they were wasting $800,000 on transportation. There's bound to be some variability in spending...but $800,000? It's kind of an amazing article to read. And the report, too.

The Board is also cutting two checks to lawyers. This has a bill from John Audi (Sweet, Katz, & Williams) in January for nearly $6,000. (also one to a doctor, Sidney G. Ranck, Jr., for $1,200...he's in obstetrics and gynecology. That's kind of...disturbing?)

This bill has John Audi getting a check cut for nearly $3,000. And this one is another $6,500 check, along with their second lawyer, Pat Barrett, getting a check for $6,000. The list goes on.

And this is in addition to the acting superintendent's $130,000 salary (strange that seems to be missing from the salary list the Board is holding up as evidence that teachers are overpaid...)

The interesting part of that salary is that the previous superintendent was getting about $122,000/year. The acting superintendent isn't qualified to be superintendent and he's getting paid more. Part of me wonders if it's a gender thing...but that would be speculation. He's literally not qualified. The Board is paying for him to take classes and get his certification. That's why he's an acting superintendent. The previous superintendent was hired away from another district and had several years of experience. The new one is making more money and doesn't have a certificate. Somehow the Board equates this with hiring the best staff as per their resolutions back in January of 2013.

Overall the whole "response," in my opinion, is one long exercise in misleading the public. Take the claims that they have been open to bargaining this whole time in good faith with a grain of salt. They claim to care about the community and educating the kids, but their actions demonstrate, quite loudly, otherwise.

ADDENDUM

I did some quick checking of how much the frugal school board is spending on lawyer's fees. These figures were taken by eyeballing the board bills found on the school website. These reports are in PDF format, making them really really difficult to process in an automated fashion. Since I couldn't process them automatically I may have missed some payments, so the numbers I have, assuming I didn't misread some line items, would be considered a minimum paid to two law firms over the past 3 years, meaning there are probably payments missing. I think the contract talks may have extended beyond what bill items are on the website.

Regardless, this kind of money is interesting given how much the Board speaks of money problems and how expensive teachers are. It's also interesting how much vested interest the legal counsel has in prolonging the contract talks. How many meetings are there for negotiations? How much are they making per meeting?


The numbers are all there, listed with dates the checks were cut. Feel free to double check my numbers and tell me if I'm missing something. Also, I'm aware that the solicitor for the board (P.B.) does other duties, so these are not funds spent only on fighting the Union; I'm not privy to the other duties, however, so I can't break down the numbers into sub-categories. I've been told the John Audi firm was hired just to fight the Union, however, so big numbers are still big numbers.

Update 4-17-16

One of the sticking points in contract negotiations is in regards to the number of consecutive personal days a teacher may take. But it strikes me as being rather odd...what do they hope to accomplish by limiting consecutive personal days when teachers only get 3 personal days per year?

The AAEA (Union) provided an answer with a Facebook post.


A Board member wanted to "talk" (usually situations like this implies "complain", but given a lack of specific information, that again is speculation) with a teacher. Teacher was on vacation. School board member now just happens to be pushing for limits on teacher time off.

If the implication is true, the push to limit personal time off is purely for personal reasons, not for the benefit of the community's taxpayers. It's a personal vendetta dressed up as a negotiations sticking point. How many other points of negotiation are driven by purely personal reasons?