
· 8 min read
zach wick

Patterns are always obvious in retrospect. Upon reflection, the story arcs of my life are typically around ten years long.

From 2010 to 2011, I attended math lectures on the University of Michigan's campus. I wasn't enrolled as a student there; I just lived nearby and worked odd hours as an iOS contract developer. After figuring out when and where classes that sounded interesting would meet, I would purchase the textbook from the campus bookstore and then just start showing up, completing all the work but never turning it in. I would attend sporadically in the fashion of a stereotypical perennial slacker student and eventually fade out of the class unnoticed, all the while pursuing the topic on my own. I became reacquainted with numerical analysis, discrete math, and algebraic topology from my undergrad studies in this manner.

From 2014 to 2016, I was regularly traveling to New York City for work. On one particular trip, I had three in-person pitches to venture capitalists in as many days. After those pitches, I realized that I didn't really understand how venture capital funding worked financially. Since I was in NYC, I went to the NYU bookstore and bought a copy of Venture Capital and the Finance of Innovation as well as a copy of Patent Law and Policy: Cases and Materials; the latter's title caught my eye, and a quick perusal had me hooked. I had always suspected that my future endeavors would have a legal aspect, and at the time, I dreamed that after my startup exited, I would practice IP law for software businesses as a retirement hobby. This IP law textbook was purchased in the guise of being a ready source of semi-productive daydream reading as well as a physical reminder to myself of my legal aspirations.

Buying a textbook from a campus retail bookstore when you have no affiliation with the institution usually requires a non-zero amount of social engineering. The conversation usually starts with the cashier asking to scan your school ID. I've always had success by openly responding with "Oh, I'm not a student here, I just wanted to purchase this book for my own purposes." There's usually a bit of a pause, and then the cashier either shrugs and rings up my purchase, or they flag over a manager to ring it up. In the worst case, searching online by ISBN almost always yields an opportunity to purchase a given work.

Accordingly, in NYC in 2015, after a brief conversation with the cashier at the NYU bookstore, I had my books.

From 2016 to 2021, I worked at a private fintech unicorn and helped scale and create teams that worked on problems at the intersection of developer experience as a product, technical education, and marketing. This is too brief of a description for what was the most personally rewarding work-for-hire I have ever engaged in, but it is sufficient for this context.

On February 2, 2021, I published the inaugural edition of Read Law. This was the kickoff of an ambitious project to acquire an autodidactic legal education by learning in public, a la building in public. It was my intent to use learning in public to hold myself accountable for seeing this project to fruition, and as a way to build a network of peers to leverage to my later advantage.

As many of my projects do, it began in earnest. On February 8, 2021, the second issue, on Judicial review and more, went out. A week later, another issue went out, this one on the appellate flavor of judicial review. Around this time, I had an insight. I wanted two distinct outcomes from this nascent Read Law project, which I was quickly realizing would be a multi-year affair at shortest. First and foremost, I wanted to acquire a legal education for my own use and benefit. Secondarily, I wanted to be illustrative of what can be accomplished by sheer force of will. Perhaps some measure of vanity and ambition accounts for the first aim (and some of the second). The insight that resulted in my re-association with Bowling Green State University, fifteen years after first attending as a bona fide undergraduate student, was this: to be the most illustrative example of what can be learned in an autodidactic fashion, I would need to provide my learning-in-public peers a framework or lens for evaluating what learning, if any, was occurring individually and en masse. I recalled a previous conversation with a coworker while constructing a developer education and certification platform for the aforementioned private fintech unicorn. We were discussing the difficulty of accurately assessing the effectiveness of a self-directed asynchronous technical education course at a scale of thousands of new users daily. To my everlasting gratitude, my coworker said something that I remember as "what we really need is an instructional designer".

My last day working for that company came about two weeks later, on February 19, 2021. In that same period, between my sophomoric realizations regarding Read Law and the difficulties in assessing learning at my recently departed day job, I had applied to Bowling Green State University for two graduate programs: a Master of Business Administration with a concentration in Accounting, and a Master of Education in Instructional Design and Technology. Much like before, personal ambition fueled the former set of goals and less pecuniary motives fueled the latter.

I will complete the Accounting program in May 2023, and this retrospective is written as the culmination of the Instructional Design and Technology program.

With a clear eye to the idea of tailoring one's content to the medium, context, and audience it is intended for, I applied to the IDT program with this Statement of Purpose, which begins with the reflection

Reflecting back on it now, it is obvious that since my self-directed learning was just that—self directed—I meandered through the garden of knowledge picking fruits as I went, instead of systematically harvesting the fruit from one end to the other.

and ends with the appeal

My hope is that with the knowledge I gain during this degree program, I am able to make my own self-directed learnings more efficient and more comprehensive by knowing strategies and styles for teaching myself.

These same inward-facing frameworks and lenses for evaluation could also be applied externally, and thus investing time and effort in this IDT program seemed a prudent decision to make. A similar, but much more accountingly-terse, calculus and statement of purpose were used to decide to invest in the Accounting program.

Now at the maturity date of one of these capital investments, it is time to take an accounting of the performance thus far.

Since February 2021, I have spent at least $724.56 USD on books for the furtherance of my progress in the IDT program. The receipts for two books cannot be located at this time. Unsurprisingly, some readings from each program have proven useful in the other, but this figure represents books that would not have been purchased were I not in the IDT program. Calculating a similar figure for tuition is left to the imagination, but it is a sufficiently large number to be noteworthy. BGSU's website notes that the estimated tuition for the e-campus IDT program option is $14,350. That seems a reasonable low-end estimate based on my recollections of paying regular tuition bills. For easier mental math later, let's approximate the lowest possible total monetary expense of the IDT program at BGSU at $15,000 USD for books and tuition. Participating in this program also carries an opportunity cost, since I could not possibly earn at my full potential while spending some time on educational efforts. The dollarization of that opportunity cost is likewise left to the imagination.

With that $15,000 USD figure in mind, my investment in the IDT program has been a resounding success. I now have the frameworks and lenses by which to assess learning. I have increased confidence in my ability to use these lenses to internally inspect my own learning as well as externally evaluate the learning of others. I have the specific knowledge by which to critically evaluate and improve my own work, and the general understanding required to best encourage and enable others to learn from theirs. Concretely, I have a clear vision for how to complete my Read Law project in a way that achieves my goals. By working at the intersection of my prior experience in marketing to technical audiences and my newfound instructional design toolkit, I have found my educational path winding back to wandering the garden and sampling the wild fruits after a season of cultivating cash crops.

Have I experienced a change in my career aspirations since beginning the IDT program? No, I don't believe that I have. What has changed is the clarity with which I can see the abundance of ripe knowledge around me. Has the IDT program been a sound financial investment? Only time can definitively answer that question, but all indications point to the payoff period of this investment being quite short. Without ascribing values, it is a true statement that my confidence in the financial soundness of investing in a Master of Education in Instructional Design and Technology has greatly increased in the 21 months since I resolved to determine that soundness experientially.

· 3 min read
zach wick

When working in the area of software developer enablement, my goal is to get the developer to be as effective as possible, as quickly as possible. For software businesses that serve technical audiences, product adoption is often predicated on educating potential customers on why your software product solves their issue. This is best done by providing a structured knowledge set that is efficiently consumable by the intended technical audience.

To do this requires having a complete definition of who the audience is, and then tailoring the educational experience to that audience. By mapping story-based guides to common product usage patterns, the educational experience's goals and the learning developer's goals align, enabling more motivated learning (Ambrose, 2010, p. 73).

In order to provide a mental scaffolding of how the various product configuration options work in concert, all educational language must be consistent and particular in its use. For visual educational content, the particular words used and the medium they are presented in must be consistent. Typefaces and other visual cues can be used to impart special meanings to written content. This visual separation of language helps learners differentiate when they need to use declarative or procedural knowledge to complete their product use case (Ambrose, 2010, p. 34).

In order to effectively reach learners of different styles, the educational experience must be broad in its means of publication. Some developers will be most efficiently served by a comprehensive documentation set covering every product configuration option. This publication strategy provides the informational organization for this knowledge, lowering the cognitive burden of learning it (Ambrose, 2010). Other developers will be quickest to succeed by watching another developer use the product in a meaningful way. These developers are motivated by seeing real-world applications of this technical product knowledge. Ambrose suggests that tying new knowledge to real-world applications of that knowledge motivates some learners (2010, p. 83).

When product adoption is predicated on product education, it is in a software company's interest to ensure that its product education is as impactful as it can be. This educational experience can be maximized by motivating the technical audience to engage with the material and by making the knowledge structured and easily consumable.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley & Sons.

· 7 min read
zach wick

This is an email-based lecture on the first section of the twentieth edition of Constitutional Law by Noah R. Feldman and Kathleen M. Sullivan. It is meant to provide a framing augmentation to Section 1 of Feldman & Sullivan. The overall goal of this email series is to provide a structured, guided reading experience through typical legal coursework. The email content is tailored to an audience of legally curious learners who engage well with written content.

Synopsis

On the surface, this week's section was about the general concept of judicial review, specifically as performed by the Supreme Court of the United States. The concept of judicial review is straightforward enough, and its application in Marbury v. Madison (we'll refer to this case as Marbury for the rest of this writing) underpins the core of "separation of powers" into the legislative, judicial, and executive branches of government.

There is more to take away from the text, however, than just its content. This section, and its treatment of Marbury, serves as a good example of how to break the situation at hand down into distinct legal questions. The kind of thinking needed here feels very similar to the thinking required to answer the typical tech interview question of "what happens when you type google.com into your browser and press enter?" In the interview question, a productive answer usually involves breaking down the given process into discrete steps. In the opinion for Marbury by John Marshall, the situation at hand was "Is Marbury's commission void because it was not delivered to him?"

Marshall broke down this situation into three distinct legal questions, which the court then answered:

  1. Did Marbury have a right to receive his commission?
  2. If he did have a right, was there a legal remedy by which he could obtain it?
  3. If such a remedy did exist, was the Supreme Court the correct court to issue the writ of mandamus?

Marshall answered the question of whether Marbury had a right to receive his commission by noting that the commission was only delivered as a matter of custom, and that the act of delivery was not part of what made the commission valid. The question of what exactly constitutes a valid commission requires being able to break down "a commission" into both the procedural process and the actual physical commission itself. We're not going to do that here in this email (it's done in the court's opinion, which you should read), but I mention it here as a tool to put in your "think like a lawyer" toolkit.

The second question was answered by an appeal to "ubi jus ibi remedium", Latin for "where there is a right, there is a remedy". Since the answer to the first question establishes that Marbury does in fact have a legal right to his commission, the law must possess a way to ensure that he receives it. Marshall confirmed that the legal remedy available to Marbury was for a court to issue a writ of mandamus. This is a particular type of court order by which a court can command a government official to perform an action that they are legally required to perform. This type of court order is very specific in that it can only compel a government official to take some action; it cannot compel the official to arrive at a particular result. For example, a court could issue a writ of mandamus ordering a government official to perform a zoning inspection, but the court cannot compel that official to find that the inspection passes.

The third question was to decide whether the Supreme Court was the correct court to issue the writ of mandamus identified as the legal remedy in answer to the second question. Marshall arrives at the court's opinion here by completing something akin to a "proof by contradiction" from mathematics. A proof by contradiction only works because something cannot be both true and false — it must be either one or the other (maybe there is also a "neither" option, but that discussion seems out of scope for this newsletter). The general structure of a proof by contradiction is:

  1. State what you intend to prove as true (we'll call this P)
  2. Assume the opposite of what you're trying to prove is true instead (we'll call this ~P)
  3. Show that the negation leads to two statements that directly contradict each other (we'll call them Q and ~Q).

Then, because something cannot be both true and false, we know that one of Q or ~Q is true and the other is false. Because a true statement cannot imply a false statement, our ~P cannot be true, since it implies both Q and ~Q (and at least one of them is false, remember). So, since ~P cannot be true, P must be true — which is the thing that we were trying to show is true to begin with.
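In symbols (writing ¬ for the ~ used above), the whole argument form condenses to a single propositional tautology; this is the standard logic rendering, not anything specific to Marbury:

\[
  \bigl(\lnot P \implies (Q \land \lnot Q)\bigr) \implies P
\]

That is: if assuming ~P forces a contradiction, then P must hold.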

Marshall follows a similar form in the argument for his opinion. The P in this instance is "The Supreme Court's mandate can be widened by legislation." Then, by using Section 13 of the Judiciary Act of 1789, Marshall showed that the Supreme Court had original jurisdiction over this case and therefore had the authority to issue a writ of mandamus in this case. We'll call our Q statement the phrase "the Supreme Court has original jurisdiction over this case."

Next, Marshall used Section 2 of Article III of the U.S. Constitution to show that the Supreme Court only has appellate jurisdiction over this case. Under appellate jurisdiction, a court can only hear an appeal from a party to a decision by a lower court, and can only revise or correct that previous decision. This means that under appellate jurisdiction, the Supreme Court did not have the authority to issue the writ of mandamus that Marbury was seeking. We'll call our ~Q statement the phrase "the Supreme Court has appellate jurisdiction over this case."

Because a court cannot have both appellate and original jurisdiction over a case, one of our statements regarding the Supreme Court's jurisdiction over this case must be false. Marshall then sided with the argument backed by the Constitution, namely that the Supreme Court only had appellate jurisdiction over this case and therefore lacked the authority to issue the writ of mandamus that Marbury was seeking, writing

If two laws conflict with each other, the courts must decide on the operation of each. ... If then, the courts are to regard the constitution, and the constitution is superior to any ordinary act of the legislature, [then] the constitution, and not such ordinary act, must govern the case to which they both apply. (Marbury, 5 U.S. at 177–78)

The importance of Marbury is contentious, since the opinion didn't create the power of judicial review; it only affirmed how the Supreme Court interpreted the Constitution as allowing that pre-existing power. However, Marbury does help solidify the role that the Supreme Court views itself as playing in the federal government.

My key takeaways

  • a legal instrument (such as the commission in Marbury) has both a creation process and a physical form; either or both may be relevant to the question at hand.
  • judicial review: a court's power to review the actions of other branches or levels of government; esp., the courts' power to invalidate legislative and executive actions as being unconstitutional.
  • when considering a given situation, it is important to break it down into its distinct legal questions.
  • legal arguments seem very similar to mathematical proofs

New words and phrases

mandamus

ubi jus, ibi remedium

judicial review

original jurisdiction

appellate jurisdiction

Next section

Section 2 is Supreme Court Authority to Review State Court Judgments, and it sounds riveting. As always, you can find all of the raw notes for this section and subscribe to write-ups of future sections at https://law.zachwick.com and on Substack.

· 12 min read
zach wick

Abstract

Language informs the internal structure of knowledge. This structure is both a product of the knowledge it contains and an artifact of that knowledge. Human languages are themselves composed of fragments that each have their own semantic and terminological meanings. These meanings can be condensed into an intermediary form, which contains those same represented meanings and therefore has the same inherent knowledge structures. Given basic operational rules, these condensed intermediary language representations can be expressed as general purpose programming languages. Because this condensed notational form of language contains representations of the structure and content of the original human language, all fully faithful expressions of that intermediary form must also contain representations of the initial underlying knowledge structures. This paper puts forth that programming languages contain representations of the human languages that they are derived from, and that the knowledge structures of those initial human languages can be found in the knowledge structures implied by the programming language itself, as shown by both the Catala programming language and a hypothetical programming language derived from the human language Lojban.

Language informs the internal structure of knowledge

Knowledge organization and language are intimately intertwined. Thellefsen argues that knowledge organization must be informed by linguistic theory and semantics in order to represent the entirety of the knowledge within a given domain (2003, p. 211). The language used within a domain is called sublanguage or special language (Thellefsen, 2003). This definition of special language means more than just the words that are unique to that domain; it also encompasses the change in meanings of existing words when used within the given domain. For instance, the word "inheritance" has similar yet distinct meanings in the domains of law and computer science.

This difference in meaning between the two domains arises because the word "inheritance" has both a semantic meaning and a terminological meaning. The semantic meaning is common to both domains; in each, the word evokes generational ownership and motion. However, the terminological meanings in the two domains differ. In the legal domain, "inheritance" is something very specific, dealing with beneficiaries and estates. In computer science, "inheritance" refers to object-oriented programming, where generic classes of objects have distinct instantiations as objects.

When considering the functions of a given word within the languages of specific domains, a framework for analysis is useful. The framework used in this paper is KRL, provided by Bobrow and Winograd, which offers a structured way of associating descriptions with conceptual entities as an organizational strategy for declarative knowledge (1977). In KRL, a description is a group of descriptors, each a statement of some fact that is either an observation about an object or a fact about the object that is only useful as a comparator (Bobrow & Winograd, 1977). Because a description may comprise multiple descriptors, it allows describing a complex event through multiple points of view simultaneously (Bobrow & Winograd, 1977, p. 6). This ability of KRL to simultaneously describe a single event through multiple viewpoints lends itself well to describing special language, since studies of special language are concerned with both the terminological and semantic meanings of the given piece of language.

KRL is put forth by Bobrow and Winograd as a general purpose programming language. They accomplish this by specifying the rules and operations under which groups of descriptions, called units, can be interacted with (1977, pp. 6-9). They propose a syntax and a grammar for KRL and note that "We believe that it is more useful and perspicuous to preserve in the notation many of the conceptual differences which are reflected in natural language, even though they could be reduced to a smaller basis set" (p. 7). Thus, KRL represents a way to reduce natural language to a rigorous, formalized grammar and syntax via a notation set.
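To make that vocabulary concrete, here is a loose Python sketch of descriptors, descriptions, and units; every name here is invented for illustration, and this is emphatically not KRL's actual 1977 notation:

from dataclasses import dataclass, field

# A descriptor states one fact about an entity; a description groups
# descriptors under a viewpoint, so one entity can be described from
# several viewpoints at once; a unit groups descriptions.

@dataclass
class Descriptor:
    fact: str

@dataclass
class Description:
    viewpoint: str
    descriptors: list[Descriptor] = field(default_factory=list)

@dataclass
class Unit:
    name: str
    descriptions: list[Description] = field(default_factory=list)

# The word "inheritance" described simultaneously from two domain viewpoints:
inheritance = Unit(
    name="inheritance",
    descriptions=[
        Description("law", [Descriptor("transfers property from an estate to beneficiaries")]),
        Description("computer science", [Descriptor("lets a subclass reuse a superclass's behavior")]),
    ],
)

Note how the unit carries both meanings at once, mirroring KRL's ability to describe a single entity through multiple viewpoints simultaneously.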

There are other general purpose programming languages that, like KRL, began life as a notation set. APL, created by Kenneth Iverson, is a salient example. Iverson's main thesis in creating APL is that "Mathematical notation provides perhaps the best-known and best-developed example of language used consciously as a tool of thought" (1980, p. 444). In fact, Iverson originally created the notation used by APL as a teaching aid, and the notation set was only implemented as a programming language after some years of use (1980, p. 445). In his work, Iverson notes that, in general, the advantages of programming languages as tools of thought are that programming languages are universal (that is, they are general purpose), executable, and unambiguous (1980, p. 445). Iverson's treatment of APL as a notational tool of thought provides the mechanisms for combining a notation set with a programming language. Iverson suggests that a good notation must have at least a few common characteristics.

The first characteristic of a good notation is that it "must allow convenient expression not only of notions arising directly from a problem, but also of those arising in subsequent analysis, generalization, and specialization" (1980, p. 446). This is true of any general purpose programming language, as it is the definition of "general purpose." Iverson also specifies another facet of general purpose programming languages that lends itself to being a characteristic of a good notation: he puts forth that a good notation is suggestive. By this, he means that a good notation can "represent identities in brief, general, and easily remembered forms" (1980, p. 447). This means that a solution for a problem, given in a particular notation, can be recognized as the notational form of a solution for that particular problem and any other similar problem.

Much like how Bobrow and Winograd noted that their notational form of KRL was larger than strictly required (1977, p. 7), Iverson writes that APL, and indeed any good notation, should subordinate detail and be economic in its vocabulary (1980, pp. 448-449). By this Iverson means that a notation should be able to express a large number of ideas in terms of a relatively small vocabulary (1980, p. 449). The mechanism by which this expression happens is the introduction of a set of grammatical rules to be used in coordination with the notation set (1980, p. 449). A set of grammatical rules that govern the use of a language is central to the idea of a language in general.

Finally, Iverson suggests that a good notation is amenable to formal proofs (1980, p. 450). In the narrowest interpretation of that statement, one need only look at the origins of Iverson's APL language as a mathematical teaching aid to see the practical effects of this characteristic. However, when considering a programming language itself as a notation set, this characteristic instead means that within the grammatical rules of the language itself, precise and unambiguous statements can be made.

The characteristics of a good notation are also the characteristics of special language. This means that a notation (and its grammatical rules) comprises both the vocabulary of the notation itself and the meanings of those notational elements. If a notation weren't a special language contained within the general human language, then there would be no need for the notation to have a vocabulary or rules different from those of the general human language it is derived from. The characteristics of a good notation are also the characteristics of general purpose programming languages. This means that a general purpose programming language is a notation for a human language, and is also a special language of that human language.

While the ability to be unambiguous is vitally important in general purpose programming languages and notations, it is often less important for human languages to be unambiguous at all times. The human language Lojban, however, was constructed for logic-based unambiguous human-to-human and human-to-machine communication (Hintz, 2014, p. 18). Because Lojban has an unambiguous grammar, it is trivial to parse and easy to learn. The regular morphology, minimal regular syntax, and explicitly minimized semantic ambiguity of Lojban all contribute to it being well suited for accurate and efficient communication (Hintz, 2014, pp. 18-21).

Given the features of Lojban, it is clear that Lojban as a language also fits many of the characteristics of a special language as well as of a notation and programming language. Even when treating Lojban as a notational language, however, the semantics of the language are still important. This importance has the implication that the semantics of the original language are still present and important in the notational form of that language. This is true for all programming languages, and not only for condensed human languages that can function as programming languages.

In the programming language Catala, for example, legal texts can be translated to executable forms. This translation is achieved by a notation that is referred to as "the Catala programming language" (Merigoux, Chataing, & Protzenko, 2021, p. 1). The stated aim of the creators of the Catala programming language is to "bring together lawyers and programmers through a shared medium, which they can understand, edit, and evolve, bridging a gap that too often results in dramatically incorrect implementations of the law" (Merigoux et al., 2021, p. 1).

It is seemingly obvious that legal language is a sublanguage or special language derived from a base general purpose human language. Catala has been proven correct in its core compilation steps by the F* proof assistant (Merigoux et al., 2021, p. 1). This implies that Catala is amenable to formal proof and that it can express ideas unambiguously. Catala as a notation is also suggestive because it has clear semantics and "compiles to a generic lambda-calculus that can then be translated to any existing language" (Merigoux et al., 2021, p. 3).

In legal texts, there are several structures that are not typically present in other texts. The first of these atypical structures is "out-of-order definitions" (Merigoux et al., 2021, p. 3). In this type of structure, the general case is given first, followed by an enumeration of limitations or exceptions. The creators of the Catala language describe this structure as one in which "relevant information is scattered throughout, and [one section] alone is nowhere near enough information ..." (Merigoux et al., 2021, p. 3).

The second atypical structure found in legal texts is "back-patching" (Merigoux et al., 2021, p. 3). In this type of structure, a section of text is modified in place by other text that comes after the section to be modified. A variant of this structure combines "back-patching" and "out-of-order definitions" into "out-of-order back-patching", which may change the entirety of a section of legal text based on one out-of-order piece of information (Merigoux et al., 2021, p. 5).

The final atypical structures found in legal texts are "re-interpretation" and "back-patching re-interpretation" (Merigoux et al., 2021, pp. 4-5). In these structures, a section of text can be recursive or re-entrant and can back-patch preceding texts.

Finally, in the context of the special language of legal texts, the underlying logic model is one of default logic. This non-monotonic logic has been refined for legal purposes as "prioritized default logic" (Merigoux et al., 2021, p. 5).

All of these textual structures are present in the designed elements of the Catala language. In fact, the main design goal of the Catala language is exactly to provide a programming language that uses prioritized default logic and is tailored for use in law by both its syntax and semantics (Merigoux et al., 2021, p. 5).
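For readers unfamiliar with prioritized default logic, a minimal Python sketch may help; the rule encoding and every name below are invented for illustration, and none of this is Catala syntax:

from typing import Callable, Optional

# A rule is (priority, applicability test, result). Higher priority wins,
# so an exception is simply a rule with a higher priority than the
# general case it carves out.
Rule = tuple[int, Callable[[dict], bool], float]

def evaluate(rules: list[Rule], case: dict) -> Optional[float]:
    """Return the result of the highest-priority rule that applies."""
    applicable = [r for r in rules if r[1](case)]
    if not applicable:
        return None
    return max(applicable, key=lambda r: r[0])[2]

# General case first, exception after, mirroring the "out-of-order
# definitions" structure of legal text:
tax_rate_rules: list[Rule] = [
    (0, lambda c: True, 0.20),                       # default rate
    (1, lambda c: c.get("income", 0) < 10_000, 0.0), # exception: low income
]

assert evaluate(tax_rate_rules, {"income": 5_000}) == 0.0
assert evaluate(tax_rate_rules, {"income": 50_000}) == 0.20

The default conclusion stands unless a higher-priority exception displaces it, which is the non-monotonic behavior described above.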

Further Research

While Catala is a programming language specifically created for use in the legal domain, it may be useful in other domains. One could argue, however, that the application of Catala to a different set of problems is simply the application of existing law to the domain those problems exist in. Catala is interesting in this regard, as it has been used to verify the implementation of legal structures in both English and French law. This is novel because of the assumption that the English and French languages have different structures, semantics, and syntax. Juxtaposed against this one-to-many relation of Catala to both English and French is Lojban, which exists as a human language and a programming language simultaneously. Compared with English or French speakers learning Catala or APL to express their ideas, it would be interesting to study native Lojban speakers (should any exist) learning how to express their ideas in Lojban as a notational form that a computer can execute and verify.

Conclusion

By looking at the concrete examples of Lojban as a constructed human language that can be simultaneously a programming language and Catala as a constructed programming language made to match the structures of legal texts, this paper has attempted to show that a programming language is simultaneously a notation and a special language derived from a general purpose human language and that the knowledge structures present in the original human language can be found in the resulting programming language.

References

Bobrow, D.G. and Winograd, T. (1977), An Overview of KRL, a Knowledge Representation Language. Cognitive Science, 1: 3-46. https://doi-org.ezproxy.bgsu.edu/10.1207/s15516709cog0101_2

Hintz, G. (2014). Semantic parsing using Lojban – On the middle ground between semantic ontology and language (Master's thesis). Retrieved from https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2014-ma-hinz.pdf

Iverson, K. E. (1980). Notation as a tool of thought. Communications of the ACM, 23(8), 444–465. https://doi.org/10.1145/358896.358899

Merigoux, D., Chataing, N., & Protzenko, J. (2021). Catala: A Programming Language for the Law. ArXiv, abs/2103.03198.

Thellefsen, M. (2003), The role of special language in relation to knowledge organization. Proc. Am. Soc. Info. Sci. Tech., 40: 206-212. https://doi.org/10.1002/meet.1450400126

Notes

This work was created for Instructional Design 6010 Principles of Learning Design at Bowling Green State University to fulfill a requirement to research and discuss an area of research germane to the course and to one's personal interests.

· 4 min read
zach wick

Educational behaviors rooted in the theory of behaviorism, or at least its operant conditioning branch, are regularly used in software product design decision-making processes. Product onboarding, as an educational experience, is an archetypal application of these theories. For instance, within the sequence of informational popups that show a new user how to use a piece of software, the button with the positive, forward-in-the-onboarding-process call-to-action (such as "Next" or "Okay") should be consistently placed and styled, and should match the styling of the rest of the product. The result is that a user is gently nudged toward learning the visual cues and language of the product they are being educated on. As described by Chen, such reinforcement of the visual language used within the software positively influences the perception category of Bloom's psychomotor domain, which deals with "the ability to use sensory cues to guide motor activity" (2005, p. 133).

The order in which new users are guided through the main features of the software product also influences the educational impact of the onboarding. In Bloom's affective domain, which includes a learner motivation dimension among others, the organization category "organizes values into priorities" (2005, p. 133). Because a learner's motivation is increased when learning a high-priority item, the new user onboarding process should show the most important feature (or features) first. This helps ensure that new users not only see the value in becoming revenue-generating customers of the software product, but are also educated most effectively on how the software can be used to solve their business need.

Behaviorism is present in significant ways in many popular software-based educational tools. For example, the online MBA program offered by Bowling Green State University relies heavily on the Connect and SmartBook products from McGraw-Hill. SmartBook is a web-based software product that uses programmed instruction as described by Chen (2005). This behaviorist style of instruction uses a series of question sets, each associated with a section of reading. In order to progress through the instruction, the learner must answer the set of questions correctly, and may be required to re-experience the associated reading if the questions are answered incorrectly multiple times. This type of instruction is appropriate for the content of this program, which often centers on analyzing a financial process.
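As a rough illustration, the control flow of such programmed instruction can be sketched in a few lines of Python; the two-miss threshold and all names here are invented for the sketch, not SmartBook's actual behavior:

def run_section(reading: str, questions: list[tuple[str, str]]) -> None:
    """Gate progress on a question set; re-show the reading after repeated misses."""
    print(reading)
    failed_attempts = 0
    while True:
        wrong = [q for q, a in questions if input(q + " ").strip().lower() != a]
        if not wrong:
            return                # all answers correct: progress to the next section
        failed_attempts += 1
        if failed_attempts >= 2:  # answered incorrectly multiple times:
            print(reading)        # re-experience the associated reading
            failed_attempts = 0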

As described by Chen, behaviorist theories of learning position spaced repetition as an integral part of the learning process (2005). The idea of using repetition to impact learning can be leveraged with consistent use of language cues in software product documentation.

Consider the software documentation for a generic REST API for a todo list tool. Using this REST API, a user can create new, read existing, update existing, or delete existing todo items for themselves. The documentation for such an API should start with the most important use case first, to influence the organization category of Bloom's affective domain as discussed above. In our example, this means that the first bit of documentation should show how to create a new todo list item.

In the language of REST APIs, a task to be done in our example is called a resource. When talking about the resource in the documentation for this API, the capitalization and styling of the resource's name should be consistent and visually distinct from other uses of that resource's name as a word. This means that when referring to the resource that corresponds to a task to be done, it should consistently be written as "a Todo", capitalized like an English proper noun.
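A hypothetical opening snippet for such documentation might look like the following; the endpoint, field names, and auth scheme are all invented for this example:

import requests

# Create a Todo (the most important use case, so it is documented first).
response = requests.post(
    "https://api.example.com/v1/todos",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"title": "Write the quarterly report", "due": "2023-06-01"},
)
todo = response.json()  # the newly created Todo resource
print(todo["id"])

Notice that the prose says "a Todo" while the code manipulates the resource itself, reinforcing the typographic distinction described above.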

In the contrived example of a todo list REST API, the benefits of this kind of repetition may not be evident. However, consider the REST API for a payment processing product. In that context, many of the resources needed while moving through a payment process are commonly used nouns and verbs in addition to being important foundational concepts of how the software product works. A good example is the Charge API resource, which is the software representation of taking a payment from a customer. In the documentation around how to interact with a Charge API resource, there are sentences that read similar to:

To charge a Customer, create a Charge object.

In this example documentation sentence, "charge" is used as the English verb while Charge is typographically identified as the API resource. In keeping with the behaviorist emphasis on repetition to influence learning, consistency in the typographic indication of which object or abstracted concept is being referred to gives readers of the software documentation a positively impacted educational experience.

References

Chen, I. (2005). Behaviorism. In Encyclopedia of Distance Learning (pp. 127-147). IGI Global. https://www.igi-global.com/chapter/behaviorism/12097

· 6 min read
zach wick

In their work Scenario-based e-learning: Evidence-based guidelines for online workforce learning, Clark defines scenario-based e-learning and discusses its constituent parts. Most of the discussion, however, focuses on examples outside the domain of programming instruction. This brief exploration of Advent of Code as an instance of scenario-based e-learning seeks to ground Clark's discussion in this area of instruction.

In the yearly Advent of Code event, a scenario is given, and then guided steps, each with their own instructions, lead participants to build a stepwise programming solution to the general scenario.

While there is no explicitly stated learning objective, the creator of Advent of Code describes it as

"Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest

, interview prep , company training

, university coursework , practice problems, or to challenge each other."

Because there is no explicitly stated learning objective and participants can use the Advent of Code scenario to achieve their own desired ends, the guidance provided in the scenario changes Advent of Code from a "discovery learning" scenario to a "guided learning" scenario (Clark, 2012, p. 122).

In the 2021 instantiation, the scenario starts with the following prompt as the scenario's trigger event.

You're minding your own business on a ship at sea when the overboard alarm goes off! You rush to see if you can help. Apparently, one of the Elves tripped and accidentally sent the sleigh keys flying into the ocean!

Before you know it, you're inside a submarine the Elves keep ready for situations like this. It's covered in Christmas lights (because of course it is), and it even has an experimental antenna that should be able to track the keys if you can boost its signal strength high enough; there's a little meter that indicates the antenna's signal strength by displaying 0-50 stars.

Now that the scenario is established, the actual task description is given: "Your instincts tell you that in order to save Christmas, you'll need to get all fifty stars by December 25th." Each day from December 1 to December 25, two correlated puzzles are offered that build off of the previous days' puzzles to guide participants toward a complete solution.

For the first day's first puzzle, scenario data in the form of a text file of depth readings is provided. This data file looks like:

199
200
208
210
200
207
240
269
260
263

The guidance and instruction for the first puzzle is then provided, reading in part:

The first order of business is to figure out how quickly the depth increases, just so you know what you're dealing with - you never know if the keys will get carried into deeper water by an ocean current or a fish or something. To do this, count the number of times a depth measurement increases from the previous measurement. (There is no measurement before the first measurement.) In the example above, the changes are as follows:

199 (N/A - no previous measurement)
200 (increased)
208 (increased)
210 (increased)
200 (decreased)
207 (increased)
240 (increased)
269 (increased)
260 (decreased)
263 (increased)

This guidance, especially the inclusion of example output that illustrates part of the task, successfully mitigates the "flounder factor" by showing participants what is expected of them. Clark suggests that "One of the most important success factors in scenario-based e-learning is sufficient guidance to minimize the flounder factor" (2012, p. 30). In order to complete the first day's first puzzle, participants must write a program that answers the question "How many measurements are larger than the previous measurement?" The provided response to this question is automatically checked by the scenario's own programming.
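One minimal way a participant might solve it in Python, assuming the puzzle input has been saved locally as input.txt with one reading per line:

with open("input.txt") as f:
    depths = [int(line) for line in f]

# Count readings that are larger than the reading immediately before them.
increases = sum(curr > prev for prev, curr in zip(depths, depths[1:]))
print(increases)  # the ten-line example above yields 7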

If an incorrect response is provided, the scenario provides feedback that is more than simply an indication of whether or not the response was correct. Clark writes that "feedback has little value unless the learner reviews the feedback and considers how his or her actions or decisions led to the outcomes seen" (2012, p. 81). This scenario applies this insight by providing formative feedback with additional guidance of

That's not the right answer; your answer is too low. If you're stuck, make sure you're using the full input data; there are also some general tips on the about page, or you can ask for hints on the subreddit.

Hints such as "make sure you're using the full input data" guide participants to consider where and how their proposed solution deviates from the ideal solution, based on the other bits of feedback such as "your answer is too low." The inclusion of feedback statements such as "your answer is too low" or "your answer is too high" ensures that the provided feedback is similar to what Clark refers to as intrinsic feedback, in which there is a visual representation of "how the scenario plays out or responds to the learner's actions" (2012, p. 80).

This scenario does not have an explicit "reflection" phase, but that does not detract from its effectiveness as an instantiation of scenario-based e-learning. This is because "while some components, such as task deliverable, trigger event, and feedback, are required elements, others may vary according to your learning domain and context" (Clark, 2012, p. 72). In this scenario, the only reflection is implicit in each day's puzzles building off one another: each puzzle's solution must be adapted to become the solution for the next puzzle.

This scenario could be improved by providing a venue in which participants must explicitly reflect on their solution to a puzzle. Such a venue could take the form of asking participants to remark on the space or time complexity of their solution (commonly expressed in "Big-O notation"), or on the elegance and structure of their solution's code. Given that the unstated goal of the entire Advent of Code scenario is to enable participants to use programming code to solve a real-world problem, such reflection encourages participants to consider the real-world consequences of their particular solution.

References

Clark, R. C. (2012). Scenario-based e-learning: Evidence-based guidelines for online workforce learning (1st ed.). San Francisco, CA: Pfeiffer.

· 4 min read
zach wick

In robust engineering organizations, engineers contribute code to a shared work product through a process called pull requests.

This process is facilitated by pieces of software such as git, mercurial, and fossil, which enable version control of software.

Using these software tools, along with a collaboration platform such as GitHub or GitLab, engineers make proposed changes to the shared codebase, which are then reviewed by peers prior to being applied. These changes are submitted as a pull request to the shared codebase, and the review is often called a pull request review.

Because the pull request process is an integral step in the work of every engineer, regardless of skill or experience, treating it as an educational process can positively impact every engineer who works on a particular codebase. When viewing the pull request process this way, it becomes clear that it has all of the necessary characteristics of a good assessment.

A good assessment must be objective and have a clearly defined outcome. When an engineer makes code changes that result in a pull request, the changes were made with some goal in mind. That goal is sometimes the addition of a new product feature, sometimes a bug fix, and sometimes a performance improvement. In any case, there is an objective goal that the code changes under review are intended to achieve. The peer charged with performing the pull request review similarly has only three available actions: approve the changes with no feedback, approve the changes with feedback, or reject the changes with feedback.

In the first case, the pull request as an assessment is of only limited value; such a pull request only affirms that the engineer's code changes met the intended goal. When a pull request is approved or rejected with feedback, the reviewer has the opportunity to provide formative or summative feedback. The implication is that effort put into providing quality feedback on pull requests improves the quality of future pull requests that take that feedback into consideration.

A good assessment must also be valid, in that it evaluates the actual skill under consideration. Within the context of pull requests, this means that feedback and reviews must be limited in scope to the code changes under consideration. It would be inappropriate for a pull request reviewer to reject a pull request because they didn't like that its creator ate a particular food for lunch.

Using product features of GitHub, namely pull request templates, Checks, and automated test suites, pull requests can be more soundly formed into an assessment tool.

Pull request templates enable code maintainers to provide a familiar, templated set of content that engineers must address and follow in the process of creating their pull request. These templates often include directions to have the proposing engineer explain in their own words what their code changes do, consider any follow-on effects, and provide any other information that a reviewer would need to evaluate the code changes.
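As one hypothetical example, a repository's .github/PULL_REQUEST_TEMPLATE.md might contain prompts along these lines (the exact sections are invented here):

## What does this change do?
Explain the change in your own words.

## Why is it needed?
Link the relevant issue or describe the motivation.

## Follow-on effects
Note any migrations, deprecations, or documentation that needs updating.

## How was it tested?
Describe the manual or automated testing performed.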

The GitHub Checks product feature enables code maintainers to programmatically enforce code conventions and provide instruction to pull request creators who violate them. This feature offers the ability to provide formative feedback at scale.

Automated test suites are of critical importance in any software product, but take on a new role as a means of self-assessment when viewed through the lens of pull requests as an assessment tool. These automated tests run against the modified code and ensure that the behavior is as expected.
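A minimal example of the self-assessment an automated suite provides; the function and expected values below are invented for illustration:

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0 to 100)."""
    return price * (1 - percent / 100)

def test_apply_discount():
    # Encodes the expected behavior: any change that breaks it fails the
    # suite before a human reviewer ever looks at the pull request.
    assert apply_discount(100.0, 20.0) == 80.0
    assert apply_discount(50.0, 0.0) == 50.0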

Pull request reviews cannot replace targeted instruction. They can however augment previously given instruction and scale its reach to a wide audience. When onboarding new engineers to a team or product, pull request reviews and the feedback contained therein are the most impactful educational experiences related to the engineer's day-to-day job functions. This importance should be honored by giving pull request reviews the respect that they deserve as a powerful educational tool.

· 5 min read
zach wick

If grown organically, documentation experiences tend to be fragmented and inconsistent, and they often don't clearly map to how users can apply the documented product to solve their own technical problems. These issues make the documentation content more difficult to maintain and more difficult to add to in a self-consistent way.

Revamping the documentation experience with an eye towards improving the information architecture and thereby improving the maintainability and extensibility of the documentation content is the best solution to these issues.

At the highest level, moving documentation into its own separate and distinct product from the documented product is an integral initial step in this revamp process. During this migration, it is prudent to adapt the content to the appropriate content template based on the documentation type, and to adjust the information architecture of the navigation to realign the documentation experience around solving users' business problems.

What should the infrastructure underlying the documentation experience be? How exactly should the documentation experience be architected?

Building the documentation experience as its own product has the greatest engineering cost, but provides the most flexibility in what the resulting documentation experience can be. This option implies a separate maintenance burden for the documentation apart from the documented product. The resulting documentation product could be released as a free and open source documentation framework that just so happens to have a particular instance documenting a given product. However, this follow-on action also has the implicit resource requirements of any free and open source project: namely, maintenance and hosting costs.

Building off of a ready-made solution like TailwindUI's Syntax template requires a one-time payment of ~$799 USD in most cases, and would preclude releasing a free and open source offering built from the resulting documentation experience. Alternatively, using a free and open source project like Docusaurus enables building the desired documentation experience within a no-cost framework.

Alternatively, documentation experiences can be built using an off-the-shelf SaaS documentation solution like ReadMe. This option carries the lowest ongoing maintenance burden, but requires an ongoing payment obligation to the SaaS provider. It allows documentation creators to focus their efforts on the content of the documentation experience instead of splitting focus between content and infrastructure. Using a documentation SaaS product also enables the easy collection of documentation-specific metrics that can be used to drive internal decisions. These metrics may be available under the other options, but would require engineering effort to gather. A "Business" license for ReadMe is typically around $399 USD/month.

When a user enters a documentation experience, they are presumably attempting to solve a problem. The documentation experience should expedite the search for a solution featuring the documented product by aligning entry into the documentation experience with common user business problems. In addition, the design of the documentation experience's entry point should provide a quick overview of the available types of content and encourage visitors to explore out of their own curiosity.

Within many pieces of documentation, there are listed steps of more than one separate and distinct way of using a product or feature. When all the documentation content is organized vertically, it is difficult to visually identify where one set of steps ends and where another set begins. A tabbed interface provides a horizontal axis on which to visually separate distinct product/feature usage experiences.

It is also important that the internal structure of documentation content be consistent. For each type of documentation content, there should be a common base template with optional additional components. This helps ensure consistency and lessens the maintenance burden of the documentation content. Such templates also provide a way to scale a self-serve documentation creation model, in which the documentation team acts more like an editorial agency or collaborator than a primary author, because example content and helpful hints can be placed within the templates.

The four general types of documentation content are quickstarts, procedural docs, conceptual docs, and reference docs. Quickstart documentation aims to help readers get up and running with the documented product quickly, and is usually framed in terms of solving a specific business problem. These business problems are usually identical to the main use cases of the documented product.

Procedural documentation is what is most often called "documentation". It consists of step-by-step guides that educate a user on how to use the documented product to solve a particular problem. Procedural docs are distinct from quickstarts in that a quickstart's goal is to demonstrate the documented product's usefulness quickly, while procedural documentation focuses on the eventual outcome rather than execution speed.

Conceptual documentation is intended to explain the concepts and ideas behind the documented product, without specific implementation details. Conceptual documentation should describe how the documented product works, not how to use the product.

Finally, reference documentation seeks to explain exactly which product actions have which effects. This type of documentation is typically an exhaustive listing and explanation of configuration options and product features.

Constructing a set of templates for each of these four types of documentation enables the documentation as a whole to remain self-consistent while having a distinct medium by which to achieve each documentation type's specific goal.

· 4 min read
zach wick

Modern computers, or rather the computer programming languages that control them, are Turing-complete. This means that computers as we understand and experience them today can solve any problem that can be solved by an algorithm. Because any two Turing-complete systems can simulate each other (this is called equivalence), there exists more than one way to solve a given problem using a computer (or any other kind of Turing-complete system). This necessitates that when learning how to use a computer to solve a problem, the human involved must take a constructivist point of view in evaluating the computer-aided solution. That is to say, using a computer to solve a problem is more like building a bridge out of local stone, whose design is inspired by previous stone bridges the architect has experienced; it is not like uncovering a universal truth, as in Michelangelo's description of revealing the statue already within the block of stone:

“The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.”
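That "more than one way" claim is easy to make concrete. A small sketch in Python, using a deliberately simple problem:

def factorial_iterative(n: int) -> int:
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n: int) -> int:
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Two different constructions, one answer; neither is "the" true solution.
assert factorial_iterative(5) == factorial_recursive(5) == 120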

The educational process of learning computer programming, which is the same as learning how to evaluate whether the computer-aided solution to a problem is sufficiently correct, requires practice making similarly shaped evaluations in a wide variety of contexts to develop a sense of what constitutes a faithful representation of the problem to be solved and its proposed programming solution. This is similar to the idea of finding success by getting lots of shots on goal. Phrased another way, educational experiences are best for learners when tools and methods are evaluated on a per-learner basis, instead of assuming that an educational experience that works for one learner will work for all learners.

To encourage learners to practice these skills in ways that grow in complexity proportionally to their skills and experience, tools can be selected that lend themselves to this process. For example, the programming language APL began as a set of handwritten notation to help learners express algorithmic solutions to problems in an abstracted way. The notation of APL lends itself well to the mathematics of arrays, and because its notational language can be composed (via grammar rules, as in any other language), APL also serves as a powerful computer programming language. It is particularly well suited to teaching computer science concepts to new learners using the language of basic arithmetic and elementary array manipulations. In fact, APL was created as an abstraction of the existing mathematical notation for these areas to facilitate exactly this constructivist style of learning: applying existing knowledge from one field of study to a new field in order for learning to occur.
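A rough Python/NumPy analogue of that array-at-a-time style, whole-array arithmetic with no explicit loops (APL itself writes the first result as +/⍳10):

import numpy as np

n = np.arange(1, 11)   # the integers 1 through 10
print(n.sum())         # 55: the sum of the first ten integers
print(n[n % 2 == 0])   # elementary array selection: just the even values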

When applied to documentation, the takeaway is to present the same content in a variety of ways to increase the likelihood of a learner finding the documentation useful. The easy trap is to churn out a laundry list of cookie-cutter retellings in different media formats: screencasts that are simply readings of blog posts, live talks that are just live versions of the screencasts, and so on. Instead, take advantage of the unique features of each medium to enhance the material. For screencast-shaped documentation, this means longform content that shows a worked example of a real-world problem that a user can solve using the product. For live talks, real-time interactivity is the main unique feature.

As always, the message of any communication must be tailored to its audience. However, the medium of the message acts as the limiting reagent for the act of tailoring the message to its audience, and so to optimize for the impact of documentation is to carefully choose the least limiting reagent for the intended audience.

· 3 min read
zach wick

"Write better docs." That sentence might be a vague instruction coming down the chain-of-command to you. It might be something that you aspire to do. It might be something that you're actively working to do. If you find yourself in this final group, then this text is for you.

You already know that the documentation you create should be useful, but what does that mean in practice? Does it mean that users can use your documentation to solve their implementation issues? Does useful documentation mean that it helps users improve their business processes? Or does useful documentation only mean that its existence is a positive value proposition for your sales counterparts?

Useful documentation can do all of these things and more.

Documentation is the force multiplier for marketing efforts when the product being marketed is a shovel or pick in a gold rush. Good documentation can make a poorly marketed product sell decently, and great documentation can elevate great marketing to best-in-class.

For businesses making software building blocks and selling them as the picks and shovels of the Big Tech/Silicon Valley software-as-the-savior gold rush, best-in-class useful documentation pays off both as an economic return and as an increase in the quality of the portion of the customer base that requires support.

Making useful documentation isn't just getting rid of passive voice and putting in lots of screenshots. Making useful documentation requires a full reframing of documentation as a product experience in which the reader is taken on a journey through the implementation and nuances of how your product applies to their scenario.

Documentation is usually treated as a hastily thrown together, constantly out-of-date content island. Treating documentation as an educational experience instead allows the documentation to instill lasting impressions in its participants. For readers of useful documentation, their personal educational motivations are aligned with their immediate business needs and the spirit of the documentation. By teaching these participants how to apply your product as the best possible solution to their particular business problem, you increase your product's positive associations with that user and gain a perfect opportunity to educate them on the why of your product's features. These active learners form a highly motivated, captive audience for marketing messages.

A user who understands why and how your product functions is educated on your product, with declarative knowledge of what your product is and procedural knowledge of how to use your product's features. Such knowledge transfer is the purpose of the field of instructional design.

This means that if documentation experiences are treated as educational experiences, then they can be made more impactful by analyzing them through the toolset and frameworks of the instructional design world.