Saturday, November 23, 2013
Apparently Jezebel, a website that I guess is targeted towards...feminist audiences?...published an article stating that "selfies," photos people take of their own faces and post online, are less about empowerment and more about a cry for help. This has also apparently drawn a lot of Internet ire, as it became a trending topic on the Twitterz and inspired a huge number of people to prove it isn't a cry for help by posting more selfies, thereby proving...well, I think it really proved nothing. It probably made them feel better about themselves, though. Taking a stand, or something like that.
Large parts of the article seemed to make sense, or at least confirm my preexisting biases. Which for me is the same as confirming that large parts of the article are correct.
Selfies seem to be a popular form of validation on social media. I mean, why else would you post a picture of yourself making a stupid duckface?
And there seems to be a particular set of demographics more inclined to do it. Hint: you'll find them playing werewolves in Twilight, and among fans of Twilight.
Some things I do get; I get the images taken to mark some kind of accomplishment. Hey, I just graduated! I just passed the Bar! I'm drunk off my ass in a bar!
These are images that are about something more than just yourself; you're trying to convey something of meaning or worth remembering. I can relate to that.
But the image of just yourself sticking your face in a camera lens to put online? What is it trying to say or convey?
"It doesn't have to say or convey anything! I'm just having fun! I'm being MEEEEE!"
That is more along the lines of what the article called narcissism. It's being posted for some kind of validation. "Aren't I pretty?" or "Don't I look great?!" They all boil down to, "Look at MEEEE!"
This isn't a direct criticism of the behavior; it's our culture now. We love the idea of being validated by others. It's what social media is largely about. We post things on blogs hoping others will see them and tell us they're useful or great or that we're right, validating our opinions and feelings. We post crap to Facebook looking for likes or positive comments. We post things on Twitter and count retweets as affirmations that people love us.
Selfies are just another way of vying for attention.
So I'm puzzled when people blow up at this article. Sure, I disagree with the notion that selfies are a way to validate the idea that women are living up to a societal norm of what's beautiful; and I disagree with some of the characterization of what is and isn't a selfie. But it's not something to turn into a social cause or stir up righteous indignation.
I'm also puzzled when people say that selfies are "empowering." Empowered to do what? Is this a buzzword hijacked as shorthand for some cause whose point I've totally missed? Usually empowerment means you have the authority to do something. I don't know how a teenager taking a duckface photo of themselves or a guy showing how ripped he is at the gym equals authority to do anything.
There are also people who say they take these pictures because they don't fit social norms and don't care what others think of them. Which is great...but why'd you share the picture? I mean, isn't the purpose of showing something like that to seek feedback? Or are you expressing yourself in a way that elicits feedback while trying to show people you're such a badass you don't want that feedback? I just don't get it.
The only explanation I can come up with for the amount of ire this article drew is that people are overly sensitive to the framing of the idea; they object to the negative connotation of narcissism, or to the suggestion that they're seeking validation through their behavior. Naturally they show how wrong this idea is by posting selfies in droves, tagging the pictures so they trend as a Twitter topic along with the hundreds of other outraged selfie-takers, in a totally not-ironic display of group behavior.
It's possible I'm being overly narrow in my definition of a selfie. The article gave as an example a picture a group of women in the Marines posted of themselves after completing infantry training. I don't think of that as a selfie; it's a group of friends, or colleagues, or...whatever they consider themselves...a group of people who faced a big obstacle and overcame it as a team. Just because your face is in a picture doesn't mean it's a "selfie." It's a memory to look back on.
But for people itching for a social issue to fight over, especially if they view it as criticism of something they themselves do, technicalities don't need to stop them from firing up the hate wagon.
This is simply another topic I chalk up to "I just don't get it...," and I doubt I'm going to get any level-headed, rational explanations any time soon. I'm open to them. But when it comes to teenagers posting pictures just because "I feel awesome!", you'll have a hard time convincing me the motive isn't a hope that lots of people will chime in and agree that yes, they do indeed look awesome, and that anyone moved to disagree is asking to be dismissed as a "hater," another bit of Internet-emergent memedom that's equally vapid in popular meaning.
Do you have a better explanation for the meaning or motives behind a selfie? Is it something more than begging for attention in a culture that prides itself on shallow attention for the sake of attention? Or is it, as the article mentioned by way of yet another article, a way for women and younger girls to self-promote, seeing as boys are encouraged to self-promote while girls are held back from doing so?
Monday, November 11, 2013
Personal Programming Projects, Not Always Simple
I've discussed my personal programming project a few times recently, including taking a break from it. Much like with writing a story, stepping away from a project for a period of time can help you gain a little perspective.
My own project, without getting into too many details since it's still embarrassingly amateur, is at a point where it seems to work "well enough" that it could serve as a simple demo. In the process of getting to this point I've made a few observations.
One, if you lack a good spec that maps out specifically what features you want and how the application should be used, you'll probably end up with cruft that does little more than take up space. I made a pass at the source code to clear out some test calls and experimental stuff I'd thrown in. One function was called exactly once, and it carried a "test" notation. Its comments described what the function did, but not why I put it there. I think at one time I had an idea for the program to do something this function would help with later, but at some point after the application took shape, the scope of the project changed.
Two, it's probably a good thing to periodically go through and clean up extra crap from the code. It's a lot less to search through and trace when something goes wrong or wonky.
Three, just because it compiles and seems to work doesn't mean the impostor syndrome will fade. I suppose that by acknowledging there's a possibility I'm working under the shade of the impostor syndrome umbrella, I'm not actually experiencing it, but that's a rabbit hole I don't feel like pursuing here. Lacking experience in the field contributes to the feeling of frustration and inadequacy; I don't know if a problem I run into is something my colleagues would also hit or if it's a rookie mistake. In one case I was doing a rough bit of approximation math to figure out whether a couple of controls overlapped, then felt hugely stupid for not thinking sooner that there might be a method built into the libraries for detecting collisions between controls.
Google told me there was a collision detection method. I experimented a bit, but it didn't seem to work properly, or at least not in the way I expected. Narrower Googling suggested that these particular controls don't support collision detection; the proposed solution was the math route I'd been pursuing in the first place.
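For what it's worth, the math route boils down to a standard axis-aligned bounding-box test. Here's a minimal sketch in Python (the coordinates and the rectangle convention are illustrative; the actual project uses whatever the control library reports):

```python
def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    """Return True if two axis-aligned rectangles overlap.

    Each rectangle is (left, top, width, height), with y increasing
    downward, as in most UI toolkits.
    """
    # The rectangles overlap unless one lies entirely to one side
    # of the other, or entirely above or below it.
    return not (ax + aw <= bx or   # a entirely left of b
                bx + bw <= ax or   # b entirely left of a
                ay + ah <= by or   # a entirely above b
                by + bh <= ay)     # b entirely above a

# Hypothetical control positions: two 100x40 controls.
print(rects_overlap(10, 10, 100, 40, 50, 30, 100, 40))   # True: they overlap
print(rects_overlap(10, 10, 100, 40, 200, 10, 100, 40))  # False: no overlap
```

Touching edges count as "not overlapping" here; flip the comparisons to >= if you want edge contact to count.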
Four, keep track of your development work, for a reminder if nothing else. Maybe more experienced developers don't need to bother doing this, but I find a personal journal to be handy sometimes. Nothing too elaborate. I just go back to remind myself of what I've done and how long it's taken to do it. Days blur together over time...the list of what I've managed to do in my sessions helps jog my memory and keep me from thinking I've accomplished nothing.
Five, keep track of your issues and goals. I was keeping paper notes before, but now I've opened a Trello board for them. Think of a feature to add? Create a card for it. Find a weird behavior? Create a card for it. I simply created lists to classify the issues...features, quirks, etc....and I shuffle the cards around as tasks are completed.
Trello is a list of lists, not really meant for use as a bug or feature tracker. But for my needs, keeping track of issues as a series of handy lists is adequate. Maybe professionals have a better way of handling these types of tasks, but I'm not working on a team or on a giant project. Do other programmers or teams use Trello for tracking software projects, or particular aspects of software development?
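To make the "list of lists" idea concrete, here's a minimal sketch in Python of the structure I'm describing (the list and card names are just the ones I happen to use; there's nothing Trello-specific about the code):

```python
# A board is a mapping of list names to ordered cards; moving a card
# between lists mirrors dragging it across the board.
board = {
    "Features": ["Add network check", "Font color option"],
    "Quirks": ["Window flickers on resize"],
    "Done": [],
}

def move_card(board, card, src, dst):
    """Move a card from one list to another, appending it at the end."""
    board[src].remove(card)
    board[dst].append(card)

move_card(board, "Add network check", "Features", "Done")
print(board["Done"])  # ['Add network check']
```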
Six, there's more than one way to achieve a goal, but figuring out the "right" way takes experience. At least, I think it does. I've read conflicting accounts of the "right" way to design the logic flow of a program: what should go into a function? How big should functions be? What should go into a separate file? Are there practices that work fine for a solo or small-team program but would make others cringe if they saw them?
I recently read an analysis of firmware used in an embedded controller that apparently ignores most of what would be considered "best practices," yet was...is...in use in a number of cars on the road. Presumably that code was written by experienced programmers.
I've also read more than one case of coders eviscerating other coders' work as inept or incompetent.
Is there a set formula or best practice for structuring a program? Or is it learned through experience? Or, in the end, does it just not matter as long as it all compiles and runs as intended?
Seven, programming can be difficult, and there's a learning curve to even using the tools that makes it more difficult. A very long time ago I created elementary programs by typing them line by line into the memory of a Commodore 128, TRS-80, or Apple II. Later on I typed lines into a text editor, creating a single document from which an application would run (or compile, depending on the language).
Even later on I could create a large text file that contained most of what I needed to compile into an application. Certain libraries or adjunct modules could be created in extra text files and compiled in with includes.
Today's version of Visual Studio is simply overwhelming to the uninitiated. More confusing is the fact that you can leave three quarters of its features untouched and still create a working elementary program.
Discoverability is remarkably difficult. Despite great features like IntelliSense auto-completing what I type, I was unsure how best to figure out whether these particular controls were overlapping; there's a method that seems like it should work, but it doesn't. Despite an integrated environment that looks like it was ripped from a futuristic movie, I'm still scouring the Internet for example code to figure out how to do what I'm trying to do.
Some parts of the code seem to work as if by magic; it's most likely a deficiency in my experience, but tracing what is happening at which point in the application's execution is sometimes difficult, despite everything in the IDE meant to help with debugging and tracing the execution path.
That's what I miss from using simpler tools and text editors. There seemed to be less magic involved in debugging as well as understanding how and why things worked in the process; I know that if I were involved day in and day out with Visual Studio things would make more sense, but as someone using it sporadically, it's still amazingly complex. I liken it to being able to make a car start, go, and stop, but having no idea how to use the A/C, stereo or even the cup holders.
Once I finish the personal project, I'm moving on to playing with Rails. Ah...a text editor to create a usable application? Let's see if that suits me better.
Eight, your small project is not immune to feature creep. This may be related to not having a complete spec created beforehand, but as my project has evolved, I find myself not wanting to show it until all the side features are implemented. Most of them would probably not even be visible to the end user, unless something goes wrong.
It could be argued that some of the items I'm looking to implement aren't really features, but rather bug fixes. "This works well enough, unless the network doesn't exist...it needs to check for that and show the user an error" is a little different from adding a way for the user to control a font's color. In this context I've simply added conditions to check before I'm willing to release this as a 1.0 version for the scrutiny of others.
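As a sketch of the kind of check I mean, in Python (the host, port, and wording of the error are placeholders; the real application would report the failure through its own UI):

```python
import socket

def network_available(host="8.8.8.8", port=53, timeout=3):
    """Best-effort check that a network path exists, by opening a
    TCP connection to a well-known host (here, Google's public DNS)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not network_available():
    # A real application would surface this through its UI rather
    # than printing to the console.
    print("Error: no network connection detected.")
```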
Nine, I'll never be comfortable showing my work. Anyone can come up with reasons why something sucks. Working around people who are very good at something, and not in a learning or apprenticeship capacity, makes the prospect of showing them something quite intimidating. Primarily because I have no illusions that this is great work.
Then again, knowing that this could be ripped apart with questions like "Why on Earth did you do this?" and "How did this even compile?" also implies that there's nothing really to lose, other than ego, in facing that feared inevitability. I'm not a programmer by trade, so it's not a direct insult to my competency. It's also not something I'm paid to do, and creating it means an opportunity for constructive feedback to learn from, if someone is willing to take the time to explain why something should have been done a different way.
That's the latest round of observations on my little project. I don't have a release date yet; my goal is to keep inching towards completion, which I am. Progress is progress, even if it comes in hour-long chunks each week...
Wednesday, November 6, 2013
Drones over Manhattan
Word choice carries certain implications. That's why our politicians have small armies of specialists...colloquially referred to as spin doctors (though before anyone points it out: I'm aware that crafting a particular phrase to frame a subject is only one aspect of a spin doctor's craft)...to offer a "proper" framing of information being delivered to the public. It has become quite a science, using particular words to play on people's emotions and thus manipulate them into supporting a particular point of view.
Perhaps the most blatant illustration of word choice used to play on emotions can be found in the ever-popular abortion issue. "Pro-Choice" and "Pro-Life" are certainly better labels than "Anti-Choice," or "Pro-Murder," right?
Spin doctors want candidates and practitioners to stay on message and reiterate the short sound bites ad infinitum. Perhaps this works with issues people aren't personally invested in. Perhaps it works on a spectrum, such that these phrases still affect people even when they're aware of the verbal trickery and manipulation. Or perhaps the simplest explanation is that too many people unskilled in the actual "art" of spin doctoring become armchair spin doctors, stealing dumbed-down terminology from headlines the way high-schoolers copy and paste online articles to pass off as their own work.
The reason I bring this up is the headline "New Video: Drone Crash Lands in Manhattan" crossing my news ticker. A drone! In one of the busiest areas of New York City! A DRONE!
Drones have been in the news recently for being used to kill who-knows-how-many civilians in countries somewhere over in Theyhaveouroilistan. These are Predator drones...remotely controlled bomb delivery and video surveillance systems piloted by military personnel miles away from the target area. This is the image that is being ingrained into us both by the "See our nifty toys" division of the military and the humanitarian agencies denouncing their use.
But in New York! Our own soil? The images conjured up when I saw this headline were flashbacks to episodes of Dark Angel, where the police used small autonomous devices that buzzed about the city gathering video footage for surveillance purposes. And there have been headlines hinting that police departments would be interested in testing such technology.
Surely this would be interesting news! So I clicked on the headline.
What I got was more of a lesson in sensationalism.
As it turns out, someone took a radio-controlled quadcopter with a video camera and started flying it around Midtown. After taking off from his balcony and bumping into several buildings, the copter finally took enough damage (or ran out of charge) and fell to the sidewalk, "narrowly missing" someone, who called the police.
This was their drone. A toy copter with a camera onboard.
I suppose it can be, by strict definition, considered a drone. It flew. It had a camera recording its flight. Maybe it even transmitted the visual information back to a receiver at this guy's apartment, and he was flying it by the camera and not by giggling and randomly moving the control stick.
But this also puts a child's toy on par with a million-dollar piece of military hardware. Or at least it does in the eyes of the news reporters, who in turn sensationalize it for the public, making it seem as if terrorists or Big Brother are hovering outside your window to watch you undress.
There was a time when teachers and school librarians scoffed at the idea of students using online resources for research; "Anyone can put something on the Internet without making sure it's true!"
They begrudgingly started allowing Internet citations as more news agencies started posting material on websites. The next thing I remember landing on the banned list was Wikipedia, because "Anyone can post information to Wikipedia!"
Now they begrudgingly allow Wikipedia to be used as a "starting point" for research papers.
There's a taste of irony in seeing how the accepted, vetted, trusted sources...such as this news channel...in a bid for ratings, and in an attempt to beat all the non-vetted, untrusted Internet sources, don't hesitate to skew news headlines beyond the point of being misleading.
Ratings through word choice. Manipulation of public perception through word choice.
Maybe teachers should re-emphasize the importance of the thesaurus when reviewing the Common Core.