
8 posts tagged with "development"


· 3 min read
zach wick

If you are selling a tool to a profit center, you are enabling your customer to do more, faster. To be somewhat reductive, this means that you are selling a time-saving measure.

Selling to a profit center

If you are selling a tool to a cost center, then you are enabling your customer to do the same amount of work cheaper.

Selling to a cost center

Think about your customer's product as a vector in a 2D space whose axes are "time savings" and "cost savings". The magnitude of your customer's product vector is the important part. A larger magnitude means "more success", while a smaller magnitude means "less success", however your customer defines "success". Your product is also a vector in that same space.

The magnitude of the sum of your customer's product vector and your product's vector is the success that your customer can have when they're using your product. This is obviously what you should try to optimize. This is shown as the dotted line in the above illustrations.
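As a rough sketch of this framing (the specific numbers here are hypothetical, chosen only for illustration), the combined magnitude can be computed like so:

import math

# Hypothetical vectors in the (time savings, cost savings) space described above.
customer_product = (3.0, 1.0)  # your customer's product
your_product = (2.0, 0.5)      # your product

# Add the two vectors component-wise, then take the magnitude of the sum.
combined = (customer_product[0] + your_product[0],
            customer_product[1] + your_product[1])
print(math.hypot(*combined))  # the "success" your customer can reach with your product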

It doesn't matter which axis is increasing more because of the influence of your product's vector. This is because the same product can be sold as a time saving tool to a profit center, or as a cost saving measure to a cost center. The trick is to figure out if your target user is a cost center or profit center and then frame your product as the appropriate type of tool.

Selling tools to developers is usually selling to a profit center. This means that if you are selling to developers, the entire experience around your product must be quick and snappy.

Your product should be installable with one or two commands, or be available to download, install, and use right away, without needing a back-and-forth with a salesperson.

Your documentation should have a quickstart guide that enables your users to get up and running in a matter of minutes.

The quicker that a user can be successful with your product, the larger the time savings offered by your product will seem. A larger (perceived or actual) time savings makes selling your product to a profit center that much easier, because you are already showing evidence of your product's value proposition.

When you are building a product, you are actually building two correlated products: your actual product, and the experience around using it. Clearly, it is vitally important that your product actually solves your customer's issue. However, your product must also solve that issue in a way that is better along either the "cost savings" or "time savings" axis than any other solution. It is your job to figure out which of those axes is more important to your customer, based on whether they are a cost center or a profit center, and market your product accordingly.

· 7 min read
zach wick

I am an elected legislator in Carey, Ohio. This post has been adapted from a proposal that I brought to the Village Council in May 2020, which was ultimately not pursued due to financial considerations brought on by the COVID-19 pandemic. I am sharing bits of it here to serve as a reference point for a non-government backed version of a similarly shaped program.

Where the following proposal reads "the Village of Carey", one can (and should) substitute "a community". A government is not required for people to better the community in which they find themselves, and therefore such a program need not be supported by a government but instead can be supported by any community that so chooses.

Proposal

The Village of Carey should create, fund, and administer a program in which any person with an income tax obligation to the Village of Carey can have their costs for tuition and materials covered if they enroll in, and successfully complete, a technical training course.

Background

For the period from 2010 to 2018, Wyandot County had the highest recorded job growth rate in the state, at 23% [1]. Counteracting this job growth is a steady decline in county population of -3% over that same period [2]. There is also a projected population growth rate of -11% for the period from 2010 to 2040 [2]. Given these trends, there is an increased focus on helping the underemployed and removing barriers to entering the local workforce. One way in which the Village of Carey can have a measurable impact on both of these focus areas is by encouraging remote employees of technology companies to live and work in the village.

Financials

As of the 2000 census, the median household income in the Village of Carey was $33,116/year [3]. We know that the current median household income in the Village of Carey is less than or equal to $45,000/year because of the loan terms that the village qualified for to finance the construction of the wastewater treatment plant. According to Indeed, the average salary for an entry level software engineer in the state of Ohio is $67,235 [4].

This means that, on average, the expected difference in salary for a citizen who completes this program and is placed in an entry level software engineering role would be an increase of $34,119/year. Given the village's 1.5% income tax, this implies that the Village of Carey could expect an increase of, on average, $511/year in income tax revenue from that citizen.
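As a quick back-of-the-envelope check (a sketch using only the figures cited above):

# Figures cited above
median_household_income = 33116  # 2000 census, Village of Carey
entry_level_swe_salary = 67235   # Indeed average for Ohio

salary_increase = entry_level_swe_salary - median_household_income
print(salary_increase)  # 34119

# Carey's income tax is 1.5%, i.e. 15 per 1000
annual_tax_revenue = salary_increase * 15 // 1000
print(annual_tax_revenue)  # 511, the per-citizen revenue increase cited above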

The sticker price for one year of access to all courses on OneMonth.com is $299. This price is presumably open to negotiation, and OneMonth has already shown some appetite for offering a discount in casual conversations that the author has had with their team.

This implies that the Village would break even on this program in a single year if only half of the program's participants were successful.

There may end up being other ancillary costs, such as office supplies and other sundries that program participants may need for their workshops. However, the assumption is that the workshop space and many of these other items will likely be donated by the community. The Village already has funds earmarked for economic development efforts, and this program would seemingly qualify to be covered by that account.

Course Selection

Since 2014, JavaScript has been the top language used by open-source repositories hosted on GitHub [5]. JavaScript is the primary language for building web applications, and it is increasingly being used to build mobile applications. By learning JavaScript, program participants will implicitly learn about HTML and CSS as well.

Program Structure

The following is a first attempt at structuring the program in a self-sustaining way.

Informational Session

A clear description of the program should be placed in the local newspapers, and an informal informational session should be held for interested parties to ask questions and receive an overview of the program.

Application Period

For a period of two weeks, any interested eligible person may apply to be in the pilot. Applications are to be turned in to the Village Administrator, who will review them with the program's operating committee.

Program Selection

Participants in the pilot program will be selected based on the following criteria:

  • Technical aptitude
  • Personal mission statement
  • Availability compatible with other program participants

Registration Workshop

A workshop session in which each pilot program participant individually registers for the selected course(s) on OneMonth.com.

(Bi-)Weekly Workshop Sessions

The weekly or bi-weekly workshops in which program participants get in-person (either live or via video chat) help from mentors and complete coursework from the selected course(s). These workshops will be held at a location with sufficient network bandwidth, grounded electrical sockets, and seating for the program participants.

Reimbursement

After the successful completion of a course, program participants will be reimbursed any contracted fees. This would include any Ohio TechCred reimbursements for course materials in addition to any reimbursement from the Village of Carey.

Job Search Assistance

After the successful completion of a course, program participants will receive coaching on resume construction and will work with the selected recruiting firm to find and apply to relevant job openings. The positions that participants apply for should ensure that they continue to have an income tax obligation to the Village of Carey.

Post-Completion Mentoring

While completing their job search, program participants will be expected to serve as mentors for at least three (bi-)weekly workshops for the following cohort of program participants.

In addition to past program participants fulfilling their mentorship obligations, community members with appropriate skill and interest may serve as volunteer mentors and help unblock program participants during (bi-)weekly workshops.

Post-Program Projects

After the successful completion of a course, program participants may desire to continue practicing their skills, build out a professional portfolio, and help their community. They can do this by working in small teams on various civic technology projects such as the brainstormed list that follows.

Carey 311 app

A way for citizens to report issues such as potholes, downed limbs, dangerous sidewalks, etc.

Tree mapping app

A way for citizens to catalogue and adopt trees in the rights-of-way and village parks.

Village data portal

A https://data.gov-style web portal for all datasets that the village maintains or generates.

Village document portal

A web portal for easily searching and retrieving any public record from the Village of Carey.

Example timeline

2020-05-04 Initial presentation to Council.
2020-05-18 Discussion of updated proposal addressing any initial feedback.
2020-05-19 Begin formalizing any pricing discounts with OneMonth.
2020-05-26 Public informational session.
2020-05-26 Application period opens.
2020-06-09 Application period closes.
2020-06-12 Accepted Session #1 applicants are notified.
2020-06-15 Session #1 begins with a Registration Workshop.
2020-06-15 Session #1 weekly/bi-weekly workshop sessions begin meeting.
2020-06-29 Session #2 application period opens.
2020-07-13 Session #1 ends.
2020-07-13 Session #2 application period closes.
2020-07-17 Accepted Session #2 applicants are notified.
2020-07-20 Session #2 begins.

References

[1] A. DeMartini, "Rural job growth rate in Ohio surpasses Columbus rate," May 2020.
[2] A. Huston, "County shows largest job growth in state," The Progressor Times, p. 3A, Nov 2019.
[3] Assorted, "Carey, Ohio," Wikipedia, May 2020.
[4] Indeed, "Entry level software engineer," May 2020.
[5] GitHub Inc., "The state of the octoverse," September 2019. https://octoverse.github.com/#top-languages

· 8 min read
zach wick

The Ohio Dept of Health is doing contact tracing as part of their response to COVID-19. When a person tests positive for COVID-19, they provide their local health department with the name and phone number of any recent close contacts. Those close contacts are then contacted by the local health department and are asked to self-quarantine for 14 days and take their temperature twice a day. These close contacts are also asked to self-report their temperatures and any other COVID-19 symptoms that they may have either over the phone to a health department employee or via a web app.

If a person chooses to report their temperature and other symptoms via the web app, they are sent a URL daily at 1600 and then again every 30 minutes until they visit the URL and report their information. These URLs look like the following:

https://octs.odh.ohio.gov/symptom-tracker?p=T0RILTI5ODI1&d=MDcvMTMvMjAyMA%3D%3D&language=en

This was the URL that I was sent one day, as I was a close contact of a person who had tested positive for COVID-19 and I did self-quarantine for 14 days.

Breaking down the URL

Note that the query string of these URLs contains three fields: p, d, and language.

It's clear that language refers to the language that the requested page should be presented in. This can easily be tested by substituting es for en and noting that the page is now presented in Spanish. Further testing revealed that the self-reporting web app is only available in English and Spanish.

After receiving one of these URLs via SMS every day, it was apparent that the value of the p field didn't change, while the value of the d field did change each day. That suggested that the p field is the "patient id", while the d field is the date for which the user is self-reporting their temperatures and symptoms.

Noting that '%3D%3D' is the URL-encoded string '==' suggests that the value of the d field is simply the date in the format 'MM/DD/YYYY', base64 encoded (because base64-encoded strings can be padded with '=' or '==' at the end). A simple test of this showed that when I base64 encoded the string '07/18/2020', URL encoded the result, and then supplied that value as the value of the d field, I could submit information for myself for any future date that was before the last day of my 14-day quarantine.

In my particular case, that meant that simply by changing the URL, I could submit temperature measurements and other symptoms for myself for any date between 2020-07-07 and 2020-07-21. If I attempted to load the page for 2020-07-22, I was shown a message that I no longer needed to complete the survey.

This ability to submit information for a day other than the current day probably isn't strictly a security bug, but it is poorly designed, and the server-side code clearly doesn't do any validation on the reporting date, since I was able to submit made-up results for myself for 2020-07-18.

The more serious issue is the p field, which likely corresponds to a "patient id" or something of that ilk. Because the d field was base64 encoded, it would make sense that the p field has also been base64 encoded. Note that the value of the p field does not end with '%3D%3D'. This is because base64-encoded strings may be padded with '=' or '==' at the end, but don't strictly need to end with those characters. Decoding the value of the p field as base64 yields (in my case) the string "ODH-29825". This seems to be some kind of unique identifier for me in ODH's database.
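For reference, the decoding is trivial; a minimal Python sketch (this helper is my own illustration, not part of ODH's app) looks like:

import base64

p_value = "T0RILTI5ODI1"
# Unpadded base64 needs its '=' padding restored before decoding.
padded = p_value + "=" * (-len(p_value) % 4)
print(base64.b64decode(padded).decode())  # ODH-29825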

So what can you do

Now that we understand what these fields are all used for, it is trivial to create the query string parameters in reverse. For example, one could use today's date (07/27/2020) and the patient id "ODH-00000" to craft a URL like:

https://octs.odh.ohio.gov/symptom-tracker?p=T0RILTAwMDAw&d=MDcvMjcvMjAyMA==&language=en

and attempt to load the page for that patient. This page will only load if you've both selected a valid ODH patient id and chosen a date for the d parameter that falls within that patient's 14-day quarantine period.
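Constructing such a URL amounts to base64 encoding and then URL encoding the two values. A minimal Python sketch along the same lines as the script mentioned at the end of this post (the function name here is mine) might look like:

import base64
from urllib.parse import quote

def build_url(patient_id: str, date: str, language: str = "en") -> str:
    # Both parameters are just base64 encoded and then URL encoded.
    p = quote(base64.b64encode(patient_id.encode()).decode(), safe="")
    d = quote(base64.b64encode(date.encode()).decode(), safe="")
    return f"https://octs.odh.ohio.gov/symptom-tracker?p={p}&d={d}&language={language}"

# Reproduces the example URL above (with the '==' padding URL encoded as '%3D%3D',
# as in the original text message).
print(build_url("ODH-00000", "07/27/2020"))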

Originally, the patient's first and last name were displayed on this page. This means that one can simply enumerate patient IDs from ODH-00000 to ODH-9999999 (or some large upper bound), and when an actual person's name appears on the site, you then know that the person listed has been a close contact of a known positive COVID-19 case and is currently self-quarantining and self-monitoring. Once a person has been found, you simply try up to 14 successive dates as the value of the d field to determine when they were in contact with a positive COVID-19 case.

Reporting Timeline

2020-07-07

I begin receiving these text messages from ODH.

2020-07-16

I resolve to report this to the State of Ohio, as I feel it is a big enough issue to warrant doing so.

2020-07-16 @ 1530

After searching around on https://ohio.gov, I came across https://infosec.ohio.gov/Incidents/Reportanincident/StateGovernmentSites.aspx.

2020-07-16 @ 1540

I called the listed number (614-644-8660) and was asked to email the details of what I was reporting to csc@ohio.gov. I was also asked if I was a public employee, to which I responded that while I'm not an employee of the State of Ohio, I am an elected legislator in my village.

2020-07-16 @ 1549

I sent in an email indicating that I had been asked to email this address and asked how they would like me to securely get the details of the vulnerability to their team.

2020-07-16 @ 1551

I received automated emails that an incident had been created and had been assigned to someone.

2020-07-16 @ 1658

I received an email at the email address corresponding to my position as an elected legislator, asking to confirm that this email belongs to the same individual who sent the email to the state at 1540.

2020-07-16 @ 1702

I responded in the affirmative to the above email.

2020-07-16 @ 2114

I tweet the SHA256 sum of this file.

2020-07-17 @ 1559

I received an email and then shortly after a phone call from a representative from the state and I provided the responding team a password-protected ZIP archive as an email attachment and provided the password to them over the phone. This was the alternative I proposed since the team didn't seem interested in providing a GPG/PGP key that I could use to encrypt my message to them.

2020-07-23 @ 0929

I received an automated email that the incident had been resolved. This email contained the following note:

ODH worked with the vendor to strip the last names from the data. It was determined that the only data that a bad actor could see would be the name of the citizen that is being traced. ODH has excepted the fix of just removing the last name because of the volume of citizens that are being tracked.

Alternative fixes

I understand the tradeoffs that go into building software, but accepting a fix that merely stops displaying the person's last name seems less than ideal. Anyone who can construct these URLs can still submit made-up information for any person currently under a 14-day quarantine who has elected to self-report their temperatures and symptoms via the ODH's web app.

A few alternative, and somewhat more robust, fixes come to mind here. One option is to create a single-use UUID that maps to a patient and date tuple in the ODH system backend, and then provide users with URLs that look like:

https://octs.odh.ohio.gov/symptom-tracker?t=<SOME_UUID_HERE>

These UUIDs should be created new for each patient for each day, and the server-side code that the self-reporting form submits to should ensure that the date for that UUID matches the current date and that the user has not yet submitted information for that day. This would make it much more difficult to enumerate all the possible UUIDs and extract the names of people who may have been exposed to COVID-19 because a malicious actor would have a much smaller time window in which a constructed URL would be valid. This also adds at least some server-side data validation.
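A minimal sketch of that server-side check, under the assumptions above (an in-memory token store standing in for whatever database ODH actually uses), could look like:

import uuid
import datetime

# token -> (patient_id, date, already_submitted)
tokens = {}

def issue_token(patient_id, date):
    token = str(uuid.uuid4())
    tokens[token] = (patient_id, date, False)
    return token

def accept_submission(token):
    entry = tokens.get(token)
    if entry is None:
        return False  # unknown token
    patient_id, date, submitted = entry
    if submitted or date != datetime.date.today():
        return False  # already used, or not valid for today
    tokens[token] = (patient_id, date, True)
    return True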

Another option would be to just turn off the ability for citizens to self-report this health information and instead have local health department employees contact each citizen under quarantine via phone each day to record their health information. This would be much more time-consuming, and would cost much more, but would have the added benefit of not being susceptible to false data being reported for a citizen by a malicious actor simply constructing a URL.

Closing

I have a repo that contains a simple Python script demonstrating how to generate the URL for a given ODH patient id and date. It would be unethical to submit false data for a person other than yourself, but I feel it is important to show how trivial it is to construct these URLs.

· 2 min read
zach wick

Genie is a tool for assigning arbitrary tags to file paths, and then performing search operations on those tags.

Like any person, I organize my files in a standard way on the machines that I use regularly:

~/Users/
├── Desktop
├── Documents
│   ├── Personal
│   ├── Work
├── Downloads
├── Documents
├── Repos
│   ├── APL
│   ├── C
│   ├── CPP
│   ├── Go
│   ├── Guile
│   ├── JS
.
.
.

This has its advantages, such as knowing where to go when looking for a particular project. However, this structure falls over for projects with components in multiple languages, such as a Swift API with a JS client. In that case, this kind of file structure relies on naming conventions to indicate that the two directory trees at Repos/Swift/projectx-server and Repos/JS/projectx-client are part of the same project.

This is where genie comes in handy: the Swift API's directory and the JS client's directory can be assigned the same tag of "projectX", and then genie search projectX shows all of the filepaths that are associated with that project's tag.

Genie is also useful for remembering, every few months, exactly where in a large codebase some change needs to be made. This is where genie really shines in my day job, where I make semi-regular drive-by pull requests in the same area of the monorepo and can tag some deeply nested file with a self-evident tag.

genie is essentially a very small command line tool written in Swift that wraps some sqlite queries, but it serves its purpose well.

You can read more about genie in the very nascent docs.

· 3 min read
zach wick

This set of steps is the result of a weekend poking at how to get Travis-CI and GitHub configured to provide a CI pipeline for Swift packages. It is mostly for future reference for when I next start another Swift project.

Steps

  1. Create a new directory and navigate to it
mkdir Hello
cd Hello
  2. Every package must have a manifest file called Package.swift in its root directory. You can create a minimal package named Hello using:
swift package init
  3. Build your library with:
swift build
  4. Run your tests with:
swift test
  5. Initialize your package as a git repo
git init
  6. Create a new repository in your GitHub account

  7. Commit your local changes

git add *
git add .gitignore
git commit -sm "Initial Commit"
  8. Add your GitHub repo as a remote repo
git remote add origin git@github.com:zachwick/SwiftHello.git
git push -u origin master
  9. Navigate to https://travis-ci.org/ and connect it to your GitHub account

  10. Activate your new repo in Travis-CI (https://travis-ci.org/zachwick/SwiftHello)

  11. Create .travis.yml and populate it as below:

if: tag IS blank
branches:
  only:
  - master
env:
  global:
  - SWIFT_BRANCH=swift-5.0.1-release
  - SWIFT_VERSION=swift-5.0.1-RELEASE
  - PACKAGE_VERSION=0.0.1
jobs:
  include:
  - stage: Linux test
    os: linux
    language: generic
    dist: xenial
    sudo: required
    install:
    - sudo apt-get install clang libcurl3 libcurl4-openssl-dev libpython2.7 libpython2.7-dev libicu-dev libstdc++6
    - curl https://swift.org/builds/$SWIFT_BRANCH/ubuntu1604/$SWIFT_VERSION/$SWIFT_VERSION-ubuntu16.04.tar.gz > $SWIFT_VERSION-ubuntu16.04.tar.gz
    - tar xzf $SWIFT_VERSION-ubuntu16.04.tar.gz
    - export PATH="$(pwd)/$SWIFT_VERSION-ubuntu16.04/usr/bin:$PATH"
    script:
    - swift package update
    - swift test
  - stage: OSX test
    os: osx
    osx_image: xcode10.2
    language: swift
    sudo: required
    install:
    - wget https://swift.org/builds/$SWIFT_BRANCH/xcode/$SWIFT_VERSION/$SWIFT_VERSION-osx.pkg
    - sudo installer -pkg $SWIFT_VERSION-osx.pkg -target /
    - export PATH="/Library/Developer/Toolchains/$SWIFT_VERSION.xctoolchain/usr/bin:$PATH"
    script:
    - swift package update
    - swift test
  - stage: Set tag
    script:
    - git config --global user.email "builds@travis-ci.com"
    - git config --global user.name "Travis CI"
    - git tag $PACKAGE_VERSION
    - git push --quiet https://$GH_TOKEN@github.com/zachwick/SwiftHello --tag > /dev/null 2>&1
  12. Commit .travis.yml and push to GitHub

  13. Create a Personal Access Token in your GitHub account

  14. Install the Travis CLI tools by either

brew install travis

or

gem install travis -v 1.8.9 --no-rdoc --no-ri
  15. Set up and configure the travis CLI tool
travis login --auto
travis branches

The first command authenticates your CLI tool with your Travis-CI account using your local GitHub credentials. The second command confirms that the travis CLI tool is operating on your Swift project's repository.

  16. Use the Travis CLI to add your GitHub Personal Access Token as an encrypted environment variable
echo GH_TOKEN=<YOUR TOKEN> | travis encrypt --add
  17. Add a Travis-CI build status badge to your project's README file by following the steps at https://docs.travis-ci.com/user/status-images/.

Now, every time that you want to push a new version of your Swift project, you simply need to bump the version defined as PACKAGE_VERSION in .travis.yml and then commit and push your changes; if all of your tests pass, the new version tag will be pushed automatically.

· 2 min read
zach wick

The Homebrew package manager uses external commands to extend its functionality. These are either shell scripts or Ruby scripts that can be added on top of the existing brew infrastructure via the brew tap command.

When creating the license external command, I needed to add an external Ruby dependency on the octokit gem, to facilitate fetching a formula's licensing information from the GitHub API. It was obvious that I needed to add

    require 'octokit'

to my brew-license.rb script, but it wasn't obvious how I could trigger that gem being installed on a user's machine if it wasn't already present.

The solution that I settled on was to create a Gemfile where I defined my dependencies:

source 'https://rubygems.org'
gem 'octokit'

Then, at the beginning of my brew-license.rb script, which is what is executed when a user types brew license in their shell, I needed my ruby script to invoke bundler if the octokit gem wasn't installed locally on the user's machine. This can be accomplished with the following:

REPO_ROOT = Pathname.new "#{File.dirname(__FILE__)}/.."
VENDOR_RUBY = "#{REPO_ROOT}/vendor/ruby".freeze
BUNDLER_SETUP = Pathname.new "#{VENDOR_RUBY}/bundler/setup.rb"
unless BUNDLER_SETUP.exist?
  Homebrew.install_gem_setup_path! "bundler"

  REPO_ROOT.cd do
    safe_system "bundle", "install", "--standalone", "--path", "vendor/ruby"
  end
end
require "rbconfig"
ENV["GEM_HOME"] = ENV["GEM_PATH"] = "#{VENDOR_RUBY}/#{RUBY_ENGINE}/#{RbConfig::CONFIG["ruby_version"]}"
Gem.clear_paths
Gem::Specification.reset
require_relative BUNDLER_SETUP

Now, the first time that a user executes brew license, this bit of code will ensure that the octokit gem is installed locally.

You can see this in action by installing and using brew license, and if you're interested in adding licensing information for your favorite Homebrew formulae, please do feel free to submit issues or PRs!

· 9 min read
zach wick

Upon reflection, my dream workplace culture would have the following properties, listed below in no particular order:

  1. Employees are encouraged to use any system/config/setup that they want
  2. Nobody ever has set work hours
  3. Nobody ever has any set work place
  4. Every employee is salaried
  5. Any meeting lasting longer than 15 min must be scheduled at least a day in advance
  6. If a meeting follows the pub/sub model, it should be an email instead of a meeting
  7. All development happens in the open
  8. Every employee has 1 day per week to work on anything that they want
  9. Each employee must respond to at least one support case per week (if that many cases exist)
  10. Every employee has the responsibility to be at least somewhat familiar with the company strategy and product roadmap(s) at least at a high level
  11. Every employee has the right to be a part of the decision making process pertaining to the company/product vision/roadmap(s)
  12. The only information that an employee is not privy to is their coworkers’ salaries

Explanations

Employees are encouraged to use any system/config/setup that they want

Here I mean that I want every employee to constantly be tweaking their work environment to make them "better" at their job – whatever that job may be. For developers, this culture property probably means that they are encouraged to use any text editor, web browser, development tool, or operating system that they so choose. With regard to non-developer employees, this point probably means that they are encouraged to use whatever email client, software, phone, etc. is going to make them better.

The main point is that I want all employees to be enabled to constantly evaluate how they are doing their job and try to optimize their performance for whatever metric makes sense based on their role.

Nobody ever has set work hours

What I mean here is that I recognize that people have different schedules and different times at which they are most productive. Personally, I am most productive from 05:00 – 11:00 and then from about 18:00 – 20:30; for other people, that schedule is when they are least productive. The main point is that I want employees to be able to work when they are going to be most productive and not feel like they must work from 09:00 to 17:00.

Nobody ever has any set work place

The point here is very similar to point 2 above – I recognize that there are times when working in solitude at home is going to make a person the most productive and there are times when working from a coffee shop is going to be the most beneficial. There are also times when being in an office next to your coworker is going to be the best place to really crank out some work. I want the company to recognize and encourage all employees to be proactive about choosing the place that is going to be the best for them to work in. Personally, I happen to love hacking on code sitting outside on the grass in the sunshine.

Every employee is salaried

I don't ever want an employee to think, "it is 17:00, time to head home and quit thinking about this problem," nor do I ever want any employee to work longer just to make more money while being less efficient. I think that salaries encourage people to be productive however they personally can. Also, given point 2, where no employee has set working hours, paying an hourly wage would require massive amounts of paperwork and process.

Any meeting lasting longer than 15 minutes must be scheduled at least a day in advance

This is partly a corollary of points 2 and 3, and partly a standalone point. With every employee working whenever and wherever they are going to be the most productive, meetings that require synchronicity must be scheduled in advance. Additionally, some people (myself included) like to run through a mental plan of how their next few days are going to go, and being knowledgeable of all the requirements on my time is essential for that process.

If a meeting follows the pub/sub model, it should be an email instead of a meeting

There is nothing worse than a meeting in which one person is just pumping information out to the rest of the attendees with no response needed. I call this the pub/sub meeting model – as in the publishing/subscribing model for syndication and event handling. These kinds of meetings are a huge drag on people's attention, energy, focus – everything – and are better served by the speaker sending the information in an email. Meetings are only necessary when discussion is required.

All development happens in the open

This point may only apply to developers and their ilk, but I think that it is very important that developers be allowed to see, and contribute to, any code that the company uses. This allows (potentially) more eyes to review code, and it allows developers to take a brief diversion and work on something possibly entirely different from what they normally would. The advantages to this are at least twofold: it helps prevent developers from getting bored with their work, and it encourages developers to always be learning new things instead of letting their skill set stagnate.

Every employee has 1 day per week to work on anything that they want

A corollary to point 7 is that every employee has 1 day per week in which they are encouraged to work on anything that they so desire. This side project could be related to their job, or it could be practicing basket weaving. The rationale for this point is the same as the rationale for point 7 – preventing boredom and skill set stagnation.

Each employee must respond to at least one support case per week (if that many cases exist)

I think that it is important for every employee to realize what the end users' pain points are in the products. Having this information readily available makes it easier for employees to make executive decisions and make the products more user-friendly. It is also always extremely eye-opening to see how exactly customers are using what you made. Oftentimes it seems like customers find a cool way to abuse the product into working in some unintended manner.

Every employee has the responsibility to be at least somewhat familiar with the company strategy and product roadmap(s) at least at a high level

This builds on point 9 above: if every employee is empowered to make the products better, they should know where the company has decided the product should end up.

Every employee has the right to be a part of the decision making process pertaining to the company/product vision/roadmap(s)

Since employees are required to have a rough idea of the company vision and product roadmap(s), they should also have input into what those visions are. Employees also have firsthand knowledge of how customers are using, and how customers are abusing, the products because they are seeing support cases come in. This knowledge is information that is required in the product planning process.

The only information that an employee is not privy to is their coworkers’ salaries

Based on all the points above, employees are going to have a high degree of knowledge about the company. Employees are also going to be used to being able to improve any aspect of the company, which has knowledge as a prerequisite. That knowledge should be made available to all employees. It would be incredibly heartening to see real data on the company when things are going well and really eye-opening to see that same data in a downturn. That being so, the only information that an employee should not be able to see is what their coworkers' salaries are. While this information would be interesting, knowledge of it would create a hierarchy which might work to stop employees from being as proactive as they otherwise would be. Employees may start thinking, "that person makes more than me, so their input must be more valuable," which is false thinking and will lead to demoralization.

Drawbacks

There are some definite drawbacks to these points. I can see that some people might not see the value in paying employees to spend one day a week working on something that may have no direct benefit to the company. I can see how, in a very large organization, point 11 might result in a "too many chiefs, not enough indians" kind of situation. I can see that points 2 and 3, under which no employee has a set schedule or location to work in, might result in inefficiencies.

I think that the first drawback, of paid side project time, is sufficiently addressed in the rationale for that point above. As for how to avoid the "too many chiefs, not enough indians" situation, I am not sure of a solution. Assuredly, not every employee is going to want to be a part of every decision-making process, so maybe this situation is self-fixing. As for employees who have no set work hours or place becoming lazy and inefficient, I think that the proper solution is to be clear from the outset that any employee who routinely fails to meet the expectations placed on them will be let go. This has the effect of ensuring that employees who can keep themselves on task and are proactive about self-improvement remain and thrive, while those who cannot are not a drain on company resources.

Conclusion

When I had my wife proofread this, she asked me "So are you unhappy with where you work?". The answer to that is a resounding "no!". I rather enjoy my current job and the culture there has a majority of my points in it. Would I quit this current awesome job to work somewhere that has all of my points as part of its culture? The answer is that I don’t know, and it would require some thinking. When I have my own company someday, or am a very early employee at a company again, am I going to try and get these points to be part of the culture there? Of course I am going to.

I am of the firm belief that while I may get paid in exchange for slinging code around, it is always my responsibility to improve how I and my coworkers do that slinging.

· 2 min read
zach wick

Last weekend I had the privilege of volunteering, along with about 60 other people, at Ann Arbor Give Camp. Essentially, a Give Camp is a weekend hackathon where non-profits with a technical need (website, analytics tool, donation tracking, etc.) that they do not have the budget for, get volunteer developers to fill that need with some awesomely cool project. The project that I worked on was updating the design of the website and implementing a membership directory for a contractor's association.

Give Camp is different from any other hackathon that I have ever been to. First, you aren't building a project just because it has cool technical merit or because it might be a viable business. You are building a project because some do-good organization needs it to do more/better good works. Also, Give Camp is different in that you are building a technical project for a group that (probably) isn't very technically savvy. This fact tested both my patience, when teaching our non-profit representative how to use the new system, and my design skills. I was challenged to come up with simpler wording for administration options, a more intuitive layout, and a streamlined workflow for the most common use case.

Give Camp also got me out of my personal tech bubble. I was unaware that there were people who self-identified as ".NET Developers" – based on the make-up of my group, they are orders of magnitude more plentiful than I thought! That same revelation also made me see that a person could in fact do serious development on a Windows machine. I guess that I knew that these things existed; I just never encountered them in my days as a web developer/embedded systems developer.

My weekend at Give Camp was an amazing experience. It felt good to do good, and I got out of my tech bubble. I would recommend that anybody in the Ann Arbor area volunteer at next year's Give Camp, and for those elsewhere, I would absolutely advocate that you look for a Give Camp near you.