When I first started working as a software engineer, I didn’t need to think about how to grow. Every task assigned to me at work was new and challenging, and simply working hard to accomplish those tasks was enough to let me grow rapidly as a software engineer. But that is no longer the case. Even at my previous job, I felt that I was stagnating. I thought the problem lay with that particular project, it being a typical React.js front-end project with nothing new to me. Having read through dozens of job descriptions from other companies, however, I now believe this is a problem of the web front-end in general.
Web front-end development is by no means easy. To be a competent front-end engineer, you need to be proficient with HTML, CSS, JavaScript/TypeScript, and at least one front-end framework, along with a reasonable understanding of the Internet, HTTP communication, session management, client-side state management, testing, building and deployment, source code management, security, and performance optimization. I believe it usually takes three to five years of full-time employment, plus some personal study after work, to reach that point.
The problem, however, is that your growth from your job slows down rapidly at that point. The main reason is that, as I’ve noticed, 95% of front-end development can be handled at that skill level. You occasionally meet some challenges in some minor parts of the job, but that’s just some incremental growth, not the explosive one you’re used to. It’s not surprising though: whereas you were growing 10 hours a day with 8 hours from work and 2 hours from personal studies, now you are only growing 2 hours a day.
Unfortunately, challenging front-end projects are truly few and far between. I’m talking about projects like Figma, Notion, Google Docs, and so on. They go beyond typical DOM management and API communication to require extremely responsive and performant UI (Figma), complex DOM manipulation and updates (Notion), or real-time handling of conflicting data updates (Google Docs).
This was a significant problem for me. First, like most software engineers, I love meeting challenges, learning new things, and growing as an engineer. Repeatedly working on similar tasks eight hours a day started to bore me. Second, I knew that my career would stagnate along with my growth if I kept going like this. Having spent several years in other jobs, I had little time to waste in growing as a software engineer.
As I saw it, there were four different ways to further grow from this point:
It was a difficult decision to make, as all options had pros and cons. I crossed out the first option first, because I wanted to learn new things. The second option was the next to go, as I was rejected from the only such project that I could apply to. I decided to go for the third option: looking for a job where I could work on other areas of software, especially the web back-end. As of now, the job search is still ongoing, but I hope to land a job that meets these requirements.
I had no idea about this growth bottleneck before I suddenly had to face it. I hope that other web front-end engineers who read this post will have more time to understand and handle this problem before they face it like I did.
I have been fascinated with Haskell for years. I started dabbling in it when I worked with Elm several years ago and fell in love with type-oriented programming. Since then, I have studied various introductory materials such as UPenn CIS 194, Learn You a Haskell, as well as a dozen blog posts and articles. Recently, I also read Get Programming with Haskell to refresh my Haskell knowledge.
Learning the basics of Haskell was challenging, but even after gaining confidence, I found it difficult to write non-trivial Haskell programs. For instance, implementing algorithms in Haskell was much more challenging than I had expected. Many algorithms require the use of mutable arrays, which is not covered in basic Haskell. The array package was particularly daunting, as it provides four different interfaces consisting of mutable and immutable arrays that can be either strict or lazy. I had to use the mutable unboxed array based on the ST monad, but I struggled to fully understand it before using it. Eventually, I gave up on fully understanding it first and instead focused on learning how to use the library, which let me get on with implementing algorithms.
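For illustration, here is a minimal sketch of my own (not an example from the book) of the kind of code this involves: an in-place list reversal using a mutable unboxed array inside the ST monad, where the mutation stays sealed behind a pure interface.

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newListArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

-- In-place reversal with a mutable unboxed array. runSTUArray runs the
-- ST computation and freezes the array, so the function stays pure.
reverseInPlace :: [Int] -> [Int]
reverseInPlace xs = elems frozen
  where
    n = length xs
    frozen :: UArray Int Int
    frozen = runSTUArray $ do
      arr <- newListArray (0, n - 1) xs
      -- Swap elements from both ends toward the middle
      forM_ [0 .. n `div` 2 - 1] $ \i -> do
        a <- readArray arr i
        b <- readArray arr (n - 1 - i)
        writeArray arr i b
        writeArray arr (n - 1 - i) a
      return arr
```

The mutation never leaks: callers only ever see the final immutable list.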
Since then, I have read several intermediate-level articles on Haskell on and off. However, most of them felt like they were written by brilliant and well-meaning professors who are terrible at teaching. For example, when I searched for “What is RankNTypes in Haskell,” the first result was the HaskellWiki page on RankNType. It provides a brief definition that assumes the reader already knows what universal quantification is. It also includes longer sections on Church-encoded lists and RankNType’s relation to existentials, which are yet more new topics. Other materials tend to have similar pedagogical shortcomings, such as providing concise academic definitions but no examples, introducing multiple new concepts to answer the original question, or going off on a tangent without properly explaining the concept.
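To show what the wiki page could have led with, here is a minimal sketch of my own (not from the wiki): a rank-2 type says that a function argument must itself work for every type, which is exactly the "universal quantification" the definition assumes you already know.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Without RankNTypes, the argument 'f' would be specialized to a single
-- element type. The rank-2 signature says: the caller must pass a
-- function that works for *every* element type a.
applyToBoth :: (forall a. [a] -> Int) -> ([Int], [Char]) -> (Int, Int)
applyToBoth f (xs, ys) = (f xs, f ys)

-- 'length' is polymorphic enough to be passed in.
example :: (Int, Int)
example = applyToBoth length ([1, 2, 3], "hi")
```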
After deciding to go beyond basic Haskell, I began searching for intermediate-level learning materials. I was concerned that there might not be good resources available, but the Haskell community has gotten better at organizing resources over the years. When I first delved into Haskell, there was a lot of discussion about the lack of a good, widely accepted introductory text. Nowadays, however, Haskell Programming from First Principles (HPFP) has emerged as the standard introductory book. I hoped to find a similar trend in intermediate-level materials.
I was pleasantly surprised to discover several recently published intermediate-level books on Haskell: Haskell in Depth, Production Haskell, Practical Haskell, and Functional Design and Architecture.
I had several criteria for selecting a book. Firstly, it should cover commonly used techniques and idioms, including state monads, monad transformers, advanced IO handling, and proficient usage of type classes. Secondly, it should focus on practical aspects of Haskell, such as testing, error handling, and other topics, while minimizing theoretical discussions. Lastly, it should provide numerous code examples with appropriate complexity, as I’ve had negative experiences with articles that present overly complex examples.
After reviewing the introductions and table of contents, I decided to go with “Haskell in Depth”. I skipped “Production Haskell” because it had a section on team-building that didn’t immediately interest me. “Practical Haskell” lacked a substantial introduction, so I wasn’t sure what to expect. “Functional Design and Architecture” seemed to prioritize architecture over language usage, making it a better choice for later exploration.
Overall, the book is a well-written textbook. The sentences are easy to read, using vocabulary commonly found in engineering blog posts rather than in academic papers. The page layout is visually appealing, reflecting the high quality work of the editors at Manning Publications. The progression of topics and pace of explanation are appropriate. The author, Vitaly Bragilevsky, has over 20 years of teaching experience in universities, and his expertise is evident in the book’s quality. In the preface, he lays out his practical approach to the topics covered in this book:
Two unfortunate myths contribute a lot to its limited adoption:

- It is hopeless to program in Haskell without a PhD in math.
- Haskell is not ready/suitable for production.

I believe that both of these claims are false. In fact, we can use Haskell in production without learning and doing math by ourselves. … The truth is, we can apply those mathematical concepts to our code without worrying too much about them. Math is good for applying; it was created and developed over the centuries precisely for that. Nobody bothers about prime numbers and the problem of factorization when buying something with a credit card nowadays.
The book is 600 pages long and consists of 16 chapters, covering a wide range of topics. It starts with an introduction to basic Haskell features and then delves into structuring applications, organizing projects, error handling, testing, profiling, and extensions for advanced type-level programming. It also explores metaprogramming and demonstrates how to use popular libraries for data streaming, concurrency, and database interaction.
There are several parts of the book that I really liked.
First, I liked the occasional comments on Haskell’s language warts. Instead of the default String type, which is a list of Char and can be slow for serious text processing, the book recommends using Data.Text and Data.ByteString. The Prelude module exposes many unsafe functions and types, such as head, which crashes when used on empty lists. To avoid potential issues, it is often better to disable the Prelude module with the NoImplicitPrelude language extension and use an alternative custom prelude. You can learn about more language warts from a series of blog posts here.
Second, I enjoyed the author’s coverage of commonly used techniques and idioms. The book provides numerous examples demonstrating the use of basic type classes like Eq, Enum, Bounded, Show, Semigroup, and Monoid. These examples showed me alternative ways to write functions using these type classes, allowing me to become more familiar with them and use them in my own code.
-- With monadic binding
locateByName :: PhoneNumbers -> Locations -> Name -> Maybe Location
locateByName pnumbers locs name = lookup name pnumbers >>= flip lookup locs

-- Without monadic binding
locateByName' :: PhoneNumbers -> Locations -> Name -> Maybe Location
locateByName' pnumbers locs name =
  case lookup name pnumbers of
    Just number -> lookup number locs
    Nothing -> Nothing

-- With fold
rotateMany :: Direction -> [Turn] -> Direction
rotateMany = foldl (flip rotate)

-- With mconcat
rotateMany' :: Direction -> [Turn] -> Direction
rotateMany' dir ts = rotate (mconcat ts) dir
Using the Reader monad to implement read-only access to application-wide configuration would have been very useful to know about when I was working with Elm years ago. I remember what a slog it was to pass the boolean config value for light vs dark mode through multiple layers of functions just to reach the ones responsible for rendering the UI.
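The idea can be sketched in a few lines. This is my own minimal example with a hand-rolled Reader (the real one lives in Control.Monad.Reader from mtl) and a hypothetical light/dark config:

```haskell
-- Minimal hand-rolled Reader; read-only environment threaded implicitly.
newtype Reader r a = Reader { runReader :: r -> a }

instance Functor (Reader r) where
  fmap f (Reader g) = Reader (f . g)

instance Applicative (Reader r) where
  pure = Reader . const
  Reader f <*> Reader g = Reader (\r -> f r (g r))

instance Monad (Reader r) where
  Reader g >>= f = Reader (\r -> runReader (f (g r)) r)

ask :: Reader r r
ask = Reader id

-- Hypothetical app config for the light/dark mode case mentioned above
data Theme = Light | Dark
newtype Config = Config { theme :: Theme }

renderButton :: Reader Config String
renderButton = do
  cfg <- ask  -- read the config without any explicit parameter
  pure (case theme cfg of
          Light -> "<button class=\"light\">"
          Dark  -> "<button class=\"dark\">")

renderPage :: Reader Config String
renderPage = do
  btn <- renderButton  -- intermediate layers never mention Config
  pure ("<page>" <> btn <> "</page>")
```

The config is supplied exactly once, at the top: `runReader renderPage (Config Dark)`.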
Monad transformers are often mentioned in production Haskell, but they were difficult to understand. Although I couldn’t fully grasp them just from reading the book, studying multiple examples by a single author helped me understand them much better than reading disjointed blog posts by different authors. With some practice exercises, I should be able to use them confidently.
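For a flavor of what monad transformers buy you, here is a minimal sketch of my own (lookupUser is a hypothetical stand-in for a real data source): MaybeT stacked on IO lets one do-block interleave IO actions with short-circuiting failure.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))

-- Hypothetical lookup that may fail
lookupUser :: String -> Maybe String
lookupUser "alice" = Just "Alice Liddell"
lookupUser _       = Nothing

-- MaybeT IO stacks failure (Maybe) on top of IO: one do-block can both
-- perform IO and abort on the first Nothing.
greet :: String -> MaybeT IO String
greet name = do
  lift (putStrLn ("Looking up " <> name))      -- a plain IO action, lifted in
  fullName <- MaybeT (pure (lookupUser name))  -- Nothing short-circuits here
  pure ("Hello, " <> fullName)
```

Running `runMaybeT (greet "alice")` performs the IO and yields a `Maybe String`; a failed lookup skips everything after the bind.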
Third, I found the discussion on how Haskell handles common software engineering practices such as testing, error handling, profiling, and organizing the build process and file structure valuable. While these practices are not particularly different in Haskell compared to other languages, it can be time-consuming to determine the best practices and most commonly used libraries for each task. The book provides clear answers on these topics.
Fourth, I also appreciated the in-depth explanation of how GHC Haskell uses memory at runtime. I learned that GHC uses closures as the main unit of memory usage in the heap, and that they can represent unevaluated thunks, fully evaluated normal forms, or partially evaluated weak head normal forms, each with a different memory footprint. I also corrected my misunderstanding of seq, a function that forces evaluation in Haskell: instead of fully evaluating the expression as I had thought, it stops after evaluating to weak head normal form.
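The difference is easy to demonstrate with a minimal sketch (my own example, not the book's):

```haskell
-- seq forces its first argument only to weak head normal form (WHNF):
-- the outermost constructor, not the contents.
pair :: (Int, Int)
pair = (1 + 1, error "never evaluated")

-- Forcing 'pair' with seq evaluates it just to the (,) constructor.
-- The 'error' thunk inside is left untouched, so this does not crash.
firstOfPair :: Int
firstOfPair = pair `seq` fst pair
```

If seq fully evaluated its argument, `firstOfPair` would hit the `error` call; instead it returns 2.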
Fifth, the explanation of Haskell metaprogramming was also enlightening. Although I found data-type-generic metaprogramming confusing, its usage of abstract syntactic trees reminded me of how metaprogramming works in Elixir. Template Haskell, on the other hand, was much more complex, and the author provided plenty of warnings about its fragility when it comes to GHC version changes. This echoed many other warnings against using it in production.
Sixth, the book briefly touched on what is possible with advanced type-level programming in Haskell. Chapters 11 and 13 provided a brief overview of various language extensions, including DataKinds, PolyKinds, TypeFamilies, ScopedTypeVariables, KindSignatures, TypeApplications, TypeOperators, AllowAmbiguousTypes, ExplicitForAll, GADTs, and GADTSyntax. In one way or another, they all allow extending, manipulating, and dictating the types themselves. Although it was impossible to fully understand all of them on the first pass, it gave me a direction to explore if I ever want to delve into heavy type-level programming.
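As a small taste of that direction, here is a classic GADTs sketch (my own example, not from the book): the type index on the expression type makes ill-typed expressions unrepresentable at compile time.

```haskell
{-# LANGUAGE GADTs #-}

-- A tiny expression language. The index 'a' rules out ill-typed
-- expressions like 'Add (BoolLit True) (IntLit 1)' at compile time.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines 'a', so eval needs no runtime type checks.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e
```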
Nevertheless, there are some parts of the book that I did not like. First, there is a lack of exercise problems. Although the author does a great job explaining the presented code, passive learning is far less effective at solidifying the lessons learned than active learning. Personally, I took an alternative approach: typing out all the source code presented by the author while thinking about how I would implement the rest. However, it’s not the same as solving carefully curated exercise problems.
Second, the folder structure of the source code is a bit confusing. Code used in a single chapter is placed in folders like /ch01 or /ch13, while code used across multiple chapters is placed in /stockquotes or /ip. Interestingly, they are all listed as separate executables and internal libraries within a single Haskell project. I’m not sure if it was intentional, but it was an interesting example of how to organize a complex Haskell project.
Third, the example code is somewhat outdated in terms of tooling and GHC version. For installing GHC, the author refers to the Haskell Platform, which was deprecated in 2022; the current tool is GHCup. Additionally, the source code targets GHC 8.6, released in 2019, while the current stable releases are 9.2.8 and 9.4.5. I used GHC 9.4.5 for easier use with recent versions of Haskell Language Server, but had to tinker with the versions of some dependencies to get the code to compile. This also gave me brief exposure to how fragile Template Haskell is to GHC version updates and why it’s not recommended for production use.
Fourth, the pacing of the three chapters in Part 4: Advanced Haskell is noticeably faster than that of the other chapters. Although the author does state that the goal is to give a brief overview of the features, the difficulty and density of the covered concepts make it hard to keep pace. Dropping Chapter 13 on dependent types, whose adoption seems to have lost momentum in the Haskell community since the author started writing the book in 2019, could make more room for the other, stable features covered in Chapter 11. Moreover, the Idris language provides a more natural experience for learning about dependent types.
Overall, the book was enjoyable to work with. It provides a cohesive and progressive introduction to intermediate-level Haskell, which is something that a collection of blog posts and articles by different authors written in different GHC versions cannot achieve. Although I am still far from being proficient in Haskell, the book has given me a solid foundation to confidently delve into more advanced Haskell concepts.
Now, where should I go from here? The author offers some recommendations for further study, such as Parallel and Concurrent Programming in Haskell for exploring concurrency, Functional Design and Architecture for industry-level design, and Type-Driven Development with Idris for an introduction to dependent types. I also have a few other books on my list: Thinking with Types for more advanced type-level programming, Algebra-Driven Design for in-depth functional programming, and Category Theory For Programmers for an introduction to the theory behind the terms used in Haskell. Hopefully, I will enjoy reading them as much as I enjoyed this book.
This book stands out for its emphasis on pragmatism. All topics are introduced as tools for getting something done with minimal or no theory, in contrast to most Haskell books that delve deeply into theory from the outset. For example, consider how several introductory Haskell books approach the very first topic: functions.
Haskell Programming from First Principles covers lambda calculus in detail in the very first chapter before even discussing how to install Haskell. Then, in the next chapter, it discusses what expressions are and explains that functions are a specific type of expression. Programming in Haskell starts with a very formal definition of a function: “In Haskell, a function is a mapping that takes one or more arguments and produces a single result, and is defined using an equation that gives a name for the function, a name for each of its arguments, and a body that specifies how the result can be calculated in terms of the arguments.” Thinking Functionally with Haskell is aimed at learning programming using Haskell, not the language itself, and starts the book by discussing how to represent mathematics in Haskell. In contrast, Get Programming with Haskell says that functions in Haskell work just like in mathematics. Instead of talking about expressions formally, it simply mentions that there’s no need to write an explicit return statement because all Haskell functions must return a value.
Another example is the discussion about monoids. Most Haskell books start with the definition: a monoid is a binary associative operation with an identity, followed by a discussion of what each term - binary, associative, operation, identity - means and how monoid works. Get Programming with Haskell, in contrast, leads in with the programmers’ needs to combine two similar things and introduces monoid as a tool to solve that problem.
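The "combine two similar things" framing can be sketched in a few lines; this is my own illustrative example (the Stats type is hypothetical, not from the book):

```haskell
-- A monoid from the programmer's point of view: things we want to
-- combine (<>), plus a do-nothing value (mempty) for the empty case.
data Stats = Stats { visits :: Int, errors :: Int } deriving (Show, Eq)

instance Semigroup Stats where
  Stats v1 e1 <> Stats v2 e2 = Stats (v1 + v2) (e1 + e2)

instance Monoid Stats where
  mempty = Stats 0 0

-- mconcat folds a whole list with no special case for emptiness
totalStats :: [Stats] -> Stats
totalStats = mconcat
```

Associativity and identity are exactly what make `mconcat` safe to use without worrying about grouping or empty input.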
The focus on pragmatism is also reflected in the order of topics. IO is introduced very early in the book, in lesson 21 on page 249 out of 42 lessons. And yes, that’s very early in Haskell. As the author says, “the most difficult part of learning (and teaching) Haskell is that you need to cover a fairly large number of topics before you can comfortably perform even basic I/O.” Haskell Programming from First Principles delves deep into IO in the final chapter 29 on page 1059. Thinking Functionally with Haskell covers it in chapter 10 on page 239 out of 12 chapters.
In addition, the book uses practical examples. While the first two capstone examples are trivial, the subsequent ones feel like something I might have to deal with at work. These include performing calculations on time series data, processing binary data from a library book record format called MARC records, writing a clone of LINQ from C# called HINQ for interacting with databases, and creating a Haskell project for finding prime numbers including some property-based tests. The very last lessons cover other real-world tasks such as error handling, making HTTP requests and handling responses, encoding and decoding JSON data, interacting with a sqlite3 database, and using mutable arrays to implement algorithms requiring in-place mutation.
Of course, the book is not perfect. Firstly, its scope is narrower than other books. While I really appreciate that the author introduced key practical concepts like interfacing with a database or using stateful arrays, which enable writing simple projects, you need more to write more complex projects. That includes a deeper understanding of topics that the author skimmed over, such as operator precedence or defining and using types and typeclasses, and topics that were not covered, such as reader monads or monad transformers.
Secondly, skipping theory prevents thoroughly understanding the concepts covered in the book. The book explains how to do things, but not why things work that way. I mentioned that Haskell Programming from First Principles covers lambda calculus in its very first chapter. I disagree with the choice to begin the book with it, but I also acknowledge that it provides a comprehensive mental model with which you can understand Haskell’s design and inner workings. Without a theoretical background, what you learn from Get Programming with Haskell can feel like a loose collection of recipes rather than an encompassing understanding of the language.
Thirdly, avoiding theoretical terms hampers further learning of Haskell. Do you remember the definition of monoids? Most books start with its definition because that’s actually how the Haskell community engages in discussion. Look at the documentation for the monoid typeclass. Few other languages would start by talking about associativity and identity. Even error messages will throw alien technical jargon at you, like skolem variable. Clearly, I’m not the only one bummed out by that error message. But unlike communities for other languages such as Elm, Rust, or even TypeScript, the Haskell community doesn’t have a community-wide drive toward making things easier to understand. So you have to eventually pick up those theoretical terms to go further in Haskell, even if you were spared from their onslaught in Get Programming with Haskell.
Despite its flaws, I still recommend this as the best introductory Haskell book for most software engineers. The book teaches you Haskell’s syntax and basic concepts in terms of software engineering that you are familiar with, in contrast to other books that teach Haskell in terms of theoretical computer science and mathematics that you may not have learned or may have left behind in your college days. Once you have crossed that bridge into Haskell land, you can more easily pick up theory through other books.
In the preface, the author states his goal: “I’ve always wanted to read a book that shows you how to solve practical problems that are often a real pain in Haskell. I don’t particularly care to see large, industrial-strength programs, but rather fun experiments that let you explore the world with this impressive programming language. I’ve also always wanted to read a Haskell book that’s reasonably short and that, when I’m finished, enables me to feel comfortable doing all sorts of fun weekend projects in Haskell.” I believe that the author has achieved his goal and more. If you want to learn Haskell and are the right target audience, then this is the book for you.
TL;DR:

- Define and use custom error types by extending the built-in Error type.
- Do not introduce the Result type, as its benefit is not worth the cost.

Error handling in Typescript starts with the tools provided in Javascript.
/*
* Custom error types. Extending built-in Error class is great for
* interoperability, but works only after ES5.
*/
class CustomError extends Error {
constructor(errorCode, message) {
super(message);
this.name = this.constructor.name;
this.errorCode = errorCode
}
}
class InsufficientBalanceError extends CustomError {
constructor() {
super(13, "The account has insufficient balance to execute the transaction.")
}
}
class SuspendedAccountError extends CustomError {
constructor() {
super(21, "The account is suspended.")
}
}
class OffBusinessHoursError extends CustomError {
constructor() {
super(53, "The branch is off business hours and cannot execute the transaction.")
}
}
function checkBalance(account) {
  if (!account.active) {
    throw new SuspendedAccountError()
  } else {
    return account.balance
  }
}
// Try...catch statement for failable operations and handling errors
function failableFunction(account) {
  try {
    // Do something that may throw an error
    const balance = checkBalance(account)
    return balance
  } catch (e) {
    // Handle the thrown error
    console.error(e)
    if (e instanceof InsufficientBalanceError) {
      return e
    } else if (e instanceof SuspendedAccountError) {
      return e
    } else if (e instanceof OffBusinessHoursError) {
      return e
    } else {
      return undefined
    }
  } finally {
    // Do this regardless of whether there's an error or not
    console.log("Hello, world!")
  }
}
The return type can be written out for better readability. This also encourages programmers to pay more attention to error handling, especially if the return types are inconsistent or convoluted, as in the following example.
function failableFunction(account: string): number | InsufficientBalanceError | SuspendedAccountError | OffBusinessHoursError | undefined {
...
}
Union types can be used to group errors free of prototype chain.
type AccountError = InsufficientBalanceError | SuspendedAccountError
function failableFunction(account: string): number | AccountError | OffBusinessHoursError | undefined {
...
}
Custom type guards allow more precise handling of error types than Javascript’s own type guards, typeof and instanceof.

typeof checks for only the most basic types - boolean, string, bigint, symbol, undefined, function, number, object - and is not suited for handling error types.

A instanceof B simply checks if B.prototype exists anywhere in the prototype chain of A, which can result in unexpected behavior.
class AccountError extends CustomError {
constructor(message) {
super(10, message);
}
}
const e = new AccountError("test")
if (e instanceof CustomError) {
  console.log("CustomError")
} else if (e instanceof AccountError) {
  console.log("AccountError")
}
/*
* This outputs "CustomError".
* When using instanceof typeguard, you should keep track of
* the prototype chain and handle more specific errors first.
*/
Custom type guard can specify the exact error type you want to handle.
function isInsufficientBalanceError(o: unknown): o is InsufficientBalanceError {
return typeof o === "object" && o !== null && "name" in o && o.name === "InsufficientBalanceError"
}
Result type

Typescript also enables adopting the Result type to handle errors. Result, also often called Either, is not built into Typescript. Defining the type itself is easy, but defining the API around it is quite a lot of work, so I recommend using libraries. There are several, ranging from simple ones such as vultix/ts-result or badrap/result to full suites such as mobily/ts-belt or gcanti/fp-ts.

A basic definition and usage of the Result type is as follows:
class Ok<T> {
constructor(private value: T) {}
}
class Err<E> {
constructor(private value: E) {}
}
type Result<T, E> = Ok<T> | Err<E>
class ParseError extends CustomError {
  constructor(input: any) {
    const message = `Could not parse the given input: ${input}`
    super(90, message) // CustomError expects an error code first; 90 is arbitrary
  }
}
const SEASONS = ["spring", "summer", "fall", "winter"] as const
type Season = typeof SEASONS[number]
function isSeason(o: unknown): o is Season {
return typeof o === "string" && !!SEASONS.find((season) => o === season);
}
function parseSeason(s: string): Result<Season, ParseError> {
  if (isSeason(s)) {
    return new Ok(s)
  } else {
    return new Err(new ParseError(s))
  }
}
Benefits of Result type

This pattern requires all errors to be caught within the functions where they can occur. Otherwise the return types for both successful and failed operations cannot be correctly specified.
This pattern also leads to type-safe errors. Javascript can throw anything, not just the Error type - this is why caught errors have the unknown type in Typescript.
All failable operations can be represented as a single unified abstraction, improving code readability and composability. A chain of failable functions can quickly grow out of hand.
function stepOne(): string | undefined {
...
}
function stepTwo(s: string): number | StepTwoError {
...
}
function stepThree(n: number): string | StepThreeError {
...
}
function operation() {
  const stepOneResult = stepOne()
  if (stepOneResult !== undefined) {
    const stepTwoResult = stepTwo(stepOneResult)
    if (typeof stepTwoResult === "number") {
      const stepThreeResult = stepThree(stepTwoResult)
      return stepThreeResult
    } else {
      ...
    }
  } else {
    ...
  }
}
The Result type has an established pattern of API that makes such operations much easier. The specific implementation may differ among libraries, but it generally looks like this.
function stepOne(): Result<string, undefined> {
  ...
}
function stepTwo(s: string): Result<number, StepTwoError> {
  ...
}
function stepThree(n: number): Result<string, StepThreeError> {
  ...
}
function operation() {
const result = stepOne().andThen(stepTwo).andThen(stepThree)
return result
}
Drawbacks of Result type

- The Result type is tiring to use, especially when the pattern is almost never supported by the broader Javascript ecosystem.
- The Result type still cannot guarantee the absence of runtime errors at compile time. If you forget to handle a potential error, Typescript won’t remind you of it, since throw is not represented in Typescript’s type system. For example, JSON.parse can throw a SyntaxError, but its type signature is just JSON.parse(text: string, reviver?: (this: any, key: string, value: any) => any): any. Unless you remember to wrap it in Result, the program will still crash at runtime.

Error handling in Typescript is better than Javascript’s, but it is still not great. Here’s my conclusion as of now.
Define and use custom error types by extending the built-in Error type.
This is simply a standard practice. Typescript’s union type allows a very flexible definition of error types which is pleasant to use.
Always handle errors within the functions where they occur, and return errors as values to let the Typescript compiler handle them.
All Javascript errors are runtime errors, which are difficult to catch and reason about. Typescript allows programmers to manually turn them into compile time errors, which should be taken advantage of as much as possible. Unfortunately, this means that the programmer’s skill and understanding of the domain will remain the deciding factor of the program’s robustness.
Do not introduce the Result type, as its benefit is not worth the cost.
I loved using Result in Elm and Haskell, and missed it when working with Typescript. Trying it out in Typescript, however, was an unpleasant experience. The Typescript ecosystem is not compatible with it, and you have to constantly fight against it to make Result work. And I believe that if you’re fighting against the environment, it’s a losing game. Unless the language itself starts natively supporting the Result type, I won’t be using it.
Unfortunately, property-based tests are much harder to write than unit tests. Writing a test for a property of the program requires that you understand said property and express it without using the implementation of the function being tested. Sometimes it even feels like solving a brain teaser. Here’s one of the most commonly given examples: how can I test that a function that reverses a list works correctly? The answer: reversing a list twice should return the original list. Just like brain teasers, writing property-based tests becomes easier the more examples you see.
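To make the reverse example concrete, here is a dependency-free sketch in Haskell of what a property-based test does under the hood (a real framework like QuickCheck or Elm's Fuzz would handle input generation and shrinking for you):

```haskell
import Data.List (unfoldr)

-- A stand-in for a real fuzzer: a tiny linear congruential generator
-- that derives a short pseudo-random [Int] from a seed.
pseudoRandomList :: Int -> [Int]
pseudoRandomList seed =
  take (seed `mod` 10) (unfoldr step seed)
  where
    step s =
      let s' = (1103515245 * s + 12345) `mod` 2147483648
      in Just (s' `mod` 100, s')

-- The property itself: reversing twice returns the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- Check the property over 200 generated inputs.
checkMany :: Bool
checkMany = all (prop_reverseTwice . pseudoRandomList) [0 .. 199]
```

Note that the property never mentions how reverse is implemented; it only relates outputs to inputs.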
So I’d like to share some property-based tests I wrote for my big number library for the Elm language. It was a perfect fit for property-based testing: there are existing mathematical properties I can test for, and unit tests, even dozens or hundreds of them, don’t provide enough confidence in correctness for this kind of library.
decDecimalFuzzer : Fuzzer Decimal
decDecimalFuzzer =
let
int =
Fuzz.intRange 1 5
|> Fuzz.andThen (\i -> List.repeat i (Fuzz.uniformInt Random.maxInt) |> Fuzz.sequence)
|> Fuzz.map (List.map String.fromInt >> List.foldl (++) "")
fraction =
Fuzz.intRange 0 3
|> Fuzz.andThen (\i -> List.repeat i (Fuzz.uniformInt Random.maxInt) |> Fuzz.sequence)
|> Fuzz.map (List.map String.fromInt >> List.foldl (++) "")
sign =
Fuzz.oneOfValues [ "", "-" ]
in
Fuzz.map2 (++) (Fuzz.constant ".") fraction
|> Fuzz.map2 (++) int
|> Fuzz.map2 (++) sign
|> Fuzz.map (Decimal.fromString >> Maybe.withDefault (Decimal.fromInt 0))
Here’s the custom generator for a big decimal. It generates string representations of big decimal numbers, then turns them into the Decimal type that the library uses. For the string representation, it follows this process:

- Generate a sign ("" or "-").
- Generate an integer part by concatenating one to five random integers.
- Generate a fractional part by concatenating zero to three random integers.
- Concatenate the sign, integer part, ".", and fractional part into a single string.
- Parse the string into a Decimal value.
value.describe "negate"
Test.fuzz fuzzer "should return original i when applied twice" <|
[ i ->
\let
i_ =
Integer.negate << Integer.negate <| i
in
Expect.equal i i_
]
This is the test for the negate function. Just like reversing a list, it relies on the fact that negating a number twice returns the original number.
describe "fromString and toString"
    [ Test.fuzz fuzzer "should be inverse functions" <|
        \i -> Expect.equal (Integer.fromString (Integer.toString i)) (Just i)
    ]
This is the test for the fromString and toString functions. It relies on the fact that these two functions are inverses of each other.
describe "add"
    [ Test.fuzz2 fuzzer fuzzer "should have commutativity property" <|
        \i1 i2 ->
            Expect.equal (Integer.add i1 i2) (Integer.add i2 i1)
    , Test.fuzz3 fuzzer fuzzer fuzzer "should have associativity property" <|
        \i1 i2 i3 ->
            Expect.equal (Integer.add (Integer.add i1 i2) i3) (Integer.add i1 (Integer.add i2 i3))
    , Test.fuzz fuzzer "should have identity property" <|
        \i ->
            Expect.equal (Integer.add Integer.zero i) i
    , Test.fuzz fuzzer "should return zero for addition with negative self" <|
        \i ->
            Expect.equal (Integer.add i (Integer.negate i)) Integer.zero
    , Test.fuzz2 Fuzz.int Fuzz.int "should have same result for addition as Int" <|
        \i1 i2 ->
            Expect.equal (Integer.add (Integer.fromInt i1) (Integer.fromInt i2)) (Integer.fromInt (i1 + i2))
    ]
This is the test for the add function. It tests the commutativity, associativity, and identity properties of addition. It also tests that adding a number to its own negation returns zero. Note that the last test relies on Elm’s Basics library to check that addition produces the correct result, at least for Integer values within JavaScript’s integer range.
describe "mul"
    [ fuzz2 fuzzer fuzzer "should have commutativity property" <|
        \d1 d2 ->
            withinTolerance (Decimal.mul d1 d2) (Decimal.mul d2 d1)
    , fuzz3 fuzzer fuzzer fuzzer "should have associativity property" <|
        \d1 d2 d3 ->
            withinTolerance
                (Decimal.mulToMinE (Decimal.minExponent * 2) (Decimal.mulToMinE (Decimal.minExponent * 2) d1 d2) d3)
                (Decimal.mulToMinE (Decimal.minExponent * 2) d1 (Decimal.mulToMinE (Decimal.minExponent * 2) d2 d3))
    , fuzz fuzzer "should have identity property" <|
        \d ->
            withinTolerance (Decimal.mul (Decimal.fromFloat 1) d) d
    , fuzz3 fuzzer fuzzer fuzzer "should have distributive property" <|
        \d1 d2 d3 ->
            withinTolerance
                (Decimal.mulToMinE (Decimal.minExponent * 2) d1 (Decimal.add d2 d3))
                (Decimal.add (Decimal.mulToMinE (Decimal.minExponent * 2) d1 d2) (Decimal.mulToMinE (Decimal.minExponent * 2) d1 d3))
    ]
This is the test for the mul function of the Decimal type. It follows a similar pattern of testing mathematical properties, but the stakes are higher than with Integer because Decimal is a much more complex data type. I would have been fine, if slightly uneasy, with testing the correctness of the Integer type using only unit tests; I’m familiar with the usual suspects for that type: 0, NaN, positive and negative infinity, sufficiently long numbers, and so on.
Decimal is a different beast. I was not familiar with the edge cases for Decimal when writing the library, and there were simply too many combinations to test for me to be even moderately confident in the correctness of the program. Property-based tests helped me find several bugs in the mul, div, and sqrt functions that I would have been unable to discover with unit tests.
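As one more illustration of the kind of property involved, a sqrt test could be sketched like this. This is my own sketch, not code from the library: I’m assuming a Decimal.sqrt that returns a Maybe Decimal and a Decimal.abs helper, so treat those names as hypothetical.

```elm
sqrtRoundTrip : Test
sqrtRoundTrip =
    -- Property: squaring the square root of a non-negative number should
    -- approximately reproduce the original value (within tolerance).
    -- Decimal.sqrt and Decimal.abs are assumed signatures for illustration.
    Test.fuzz decDecimalFuzzer "sqrt squared approximates the original" <|
        \d ->
            let
                nonNegative =
                    Decimal.abs d
            in
            case Decimal.sqrt nonNegative of
                Just root ->
                    withinTolerance (Decimal.mul root root) nonNegative

                Nothing ->
                    Expect.fail "sqrt of a non-negative number should exist"
```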
And that’s it for the examples. You can look at more tests in my library, but there aren’t any new techniques there. I hope this post was helpful in getting started with writing property-based tests.
As a product manager, your entire day will be spent attending meetings, talking to customers, and writing documents - so when do you get actual work done? Well, those are your actual, most important tasks as a product manager. Changing your perspective can be difficult - it certainly was for me. But you should make that change and adjust to the new role. Otherwise, you will get unnecessarily stressed about your lack of productivity, when in fact you’re getting plenty done already.
But what is there to communicate so much about?
Well, let’s get back to the beginning. A product manager’s job is to [ensure that what gets built is both valuable and viable](https://svpg.com/product-management-start-here). To ensure something is valuable, you have to talk to your customers or consult with researchers and marketers who handle that for you. To ensure it’s viable, you have to talk with designers, engineers, and accounting to review available resources. And to ensure it gets built, you have to coordinate all the involved parties to nail down details, smooth out misunderstandings, handle unexpected troubles, and oversee the execution.
All of this is done by communicating. Communication is a product manager’s primary tool for getting the job done, as much as writing code is a software engineer’s. Many product managers consider communication their most important responsibility and estimate that it takes up more than half of their work hours. Based on my limited experience, they are not wrong.
A lot of job descriptions for product managers are really vague, because the role has an inherently broad scope and keeps changing even within the same company. This means that you must research exactly what you’d be responsible for and what you’d be empowered to do in that role.
For example, the job description may say that you are responsible for writing specifications and managing schedules to ensure timely release of features. But how far are you expected or allowed to go to get it done?
Often there are unspoken yet expected behaviors in each company. What if a key engineer leaves the company and a delay becomes inevitable? Were you expected to manage them and persuade them to stay on the team? Or should you have filled the hole by hiring a contractor or borrowing an engineer from another team? What if the leadership does not provide key strategic guidance? Should you bypass the leadership and go through with the current plan, or wait for the directive?
There are countless scenarios like that, and you should explore them as much as possible to get a picture of what’s required of you and what you’re allowed to do. Try not to get trapped in impossible situations: if the company demands a lot of responsibility, make sure you are sufficiently empowered. Otherwise you will not be happy in the position.
There are different jobs under the title of product manager. While some qualities are required across all specializations, each one demands not only different skills but also a different mindset.
This Reforge article on product manager specializations is excellent. I highly recommend it to anyone thinking of working as a product manager or hiring one.
The article describes four types of work: feature, growth, scaling, and product-market fit (PMF) expansion. Feature work incrementally extends a product’s functionality. Growth work captures more of the existing market by better connecting customers to the existing value of a product. Scaling work preserves the product team’s ability to deliver results quickly as the product grows. PMF expansion work expands the product into adjacent markets or product lines.
The article covers the required skills and career paths very well, so I’d like to talk about personality fit. As a product manager, you should like analyzing data, empathizing with customers, experimenting with solutions, communicating with other teams, and thinking and planning strategically. But you’re bound to love some of these more than others.
For example, I love analyzing problems and iterating on solutions, like working with others and thinking strategically, and am okay with exploring new business opportunities. So it shouldn’t be surprising that I enjoyed feature and scaling work, but struggled with growth and PMF expansion work. I was less proficient in the latter two, so skill levels played a role there. But I also picked up those skills much more slowly, because I didn’t find them fun.
If you come from an engineering background like me, you’re likely to have a similar experience, so I recommend starting with feature and scaling work. If you have a design background, you are likely to enjoy feature and growth work; with a marketing background, growth and PMF expansion work; with a business background, probably feature and PMF expansion work.
Just know that there are different branches of product management, and try out the one you like most and that fits you well.
Among all the articles about the PM skillset, Ravi Mehta’s take is my favorite. He divides the skillset into four areas and twelve skills. I recommend reading the full article here for a description of each skill and which skills matter at each step of a PM’s career.
Here were my expectations when I first took the job: I knew that I would quickly pick up the skills under the product execution and customer insight categories. My experience as a software engineer gave me insight into the former, while my education in psychology and human-computer interaction gave me some foundation for the latter. I was less confident about influencing people, but hoped that the communication skills I had gained working as a translator and interpreter would help. Product strategy was my weakness, and I wanted to get better at it.
After a year and a half, I have gained solid skills in product execution and customer insight. Influencing people is harder to evaluate, but I can communicate with accuracy and trust, although I’m still working on communicating more frequently and inspiring people. Product strategy is a bit complicated: what I initially thought of as product strategy actually spans an entire spectrum from product strategy to business strategy.
Marty Cagan talks about it in his blog post:

> “Business strategy is about identifying your business objectives and deciding where to invest to best achieve those objectives. For example, moving from a direct sales model (your own sales force selling directly to customers) to an online sales model (your customers buy from your site) is a business strategy. Deciding whether to charge for your services with subscriptions or transactions fees or whether you have an advertising-based revenue model is a business strategy. Deciding to move into an adjacent market is a business strategy.
>
> Now, clearly there are some big product implications to each of these business strategies. But they are not one in the same. There are lots of ways to sell online, lots of ways to monetize value, and lots of ways to develop or acquire and integrate an adjacent offering. The product strategy speaks to how you hope to deliver on the business strategy.
>
> Moreover, while the business may believe something is a great business opportunity, you don’t yet know if your company can successfully deliver on this opportunity. Maybe it will cost too much to build. Maybe customers won’t value it enough to pay for it. Maybe it’ll be too complicated for users to deal with. This is where product strategy and especially product discovery come into play.”
I’ve definitely gotten better with product strategy: prioritizing features based on business impact in support of business strategy, setting coherent monthly and quarterly plans to attain business objectives, and delivering business outcomes. I know enough to know what I don’t know, so I would consider myself a journeyman in this area. But that’s only half of the skillset I wanted to acquire.
Business strategy works on a level higher than that of product strategy. Borrowing Marty Cagan’s wording, to set business objectives, I have to understand the market and business trends better; to decide where to invest, I need to get better at operations and capital management. These are the other half of the skillset I wanted. And I’d like to focus on improving them, since they are also essential to running a successful business.
The problem is that while I can still grow as a PM in my current job, it’s an incremental addition to my business capabilities. To grow exponentially, I need to learn those other skills. There are multiple paths to gain them, but obviously the quickest way to learn business is by running one.
But I’m not going to quit my job right now and jump into it with little planning - as I’ve said, I’ve done that before and failed. Instead, my plan is to run a series of minimum viable tests while working my current job. At the very least I would gain the skills I want, and the tests could eventually become a side business or a full business.
As for industries, I’m looking into NFTs, blockchain in general, and VR. Those markets are still new, yet there’s enough money in them. I also love that these areas are full of people who aspire to change the world - the type of people I’d like to work with. My goal is to run at least one test before the end of this year. I’ll probably share how my tests went on this blog.
Why? I wanted more agency in creating a valuable product - more than a software developer could have. In my previous job, the product I worked on got nowhere. Our first product manager left soon after launching the product, having burnt out. The next product manager also did not stay long. Then came months-long anarchy. Without a product manager, our product just drifted. Everyone on the team worked hard to save it: designers improved the usability, developers improved the software performance, and marketers launched new marketing campaigns. But it was all to no avail. Users left, team morale tanked, and team members started leaving the company.
In the end, when the ship lacks a destination, it doesn’t matter how hard you row. I got the product manager role after talking with the leadership. Unfortunately, the company’s finances had worsened so much in the meantime that I had to leave four months after taking the role, without finishing the project I was leading. After that I landed a new job as a product manager, where I’ve been working for about a year and a half now. It has been an interesting ride.
I’ve actually been trying to write about my product manager experience for a few months, but it has been harder than writing tech posts. For tech posts, I had figured out my favorite structure: a problem, a solution, and supporting facts and arguments. It was straightforward. But I’m still figuring out how to write product posts. Who is my audience? How should I structure them? How long should they be? I was stuck there for months, but I finally decided to screw it all and just publish, a la Amy Hoy. The posts will be rough around the edges, but they will be more frequent. My ambition is to publish a post every week. So stay tuned!
My blog was built with the GitHub Pages gem, which included Jekyll and other supporting libraries. The syntax highlighting library rouge, however, was locked to version 2.2.1, which did not support Elm. Forcefully upgrading rouge might have resolved the issue, but then I thought, why not get rid of the GitHub Pages gem and upgrade all the dependencies? So began the yak shaving.
But then, why not fix the inconsistent CSS that had been bothering me? But then, why not try out a different type of static site generator? But then, why not experiment with a language I had been learning recently? Fortunately, Hakyll, a static site generator written in Haskell, seemed quite stable and had most of the features I needed. So I decided to rewrite my blog in Hakyll. At this point I fully realized that I was on a long yak shaving trip, but I just shrugged and started the journey.
Hakyll is the first Haskell library that I’ve used extensively, and it was surprisingly difficult to pick up. When I try out a new library in other languages, I first skim the documentation to get a general sense of it. Then I run small bits of code and observe the results to check whether my understanding of the library’s behavior is correct. I’m quite sure this is a typical approach.
The problem was that I couldn’t use this approach with Haskell. When I tried to run code from Hakyll after reading its documentation, the compiler simply told me that the types did not match. Such feedback was of little use, as it told me nothing about whether the code I wrote was correct. To be fair, the compiler was explaining that I had done something wrong, but it explained my mistakes in terms of Hakyll’s types, the very things I was trying to understand in the first place. So I had to rely on the documentation and source code to understand what those types meant and how they worked, which felt like trying to understand a new mathematical concept by reading only the theorems and their proofs, without ever reaching for a pen.
Another issue was that I had to understand a major portion of the library before I could use any of it. Hakyll provides hakyll :: Rules a -> IO () as the main interface with IO, so I needed a Rules a value. According to the documentation, it represents "the different rules used to run the compilers". A Rules a is created using functions like match :: Pattern -> Rules () -> Rules (), route :: Routes -> Rules (), and compile :: (Binary a, Typeable a, Writable a) => Compiler (Item a) -> Rules (). Now I had to understand what the Pattern, Routes, Compiler a, and Item a types represented and how to use them, and the journey down the rabbit hole continued. Such a journey is quite common in software, but Haskell compounded the difficulty by effectively forcing me to complete the whole journey in one go, as I couldn’t use trial and error to understand the types bit by bit.
Fortunately, Hakyll’s author provides plenty of examples through tutorials and example implementations, which were immensely helpful as I tried to understand what was going on. Without them, I wonder how much longer the journey would have taken. If you plan to write Haskell libraries, please provide plenty of example code. Haskell’s strict compiler makes it inherently more difficult to experiment with libraries, especially for beginners like me, so every little bit that mitigates that hurdle helps a lot.
Just like in my previous blog, internationalization was the most painful feature to implement. Hakyll does not provide this functionality, so I had to implement it myself. I found an existing implementation online, but I did not like it. In that implementation, the source markdown files had text for both languages preceded by language codes, and text in the unused language was removed with the Unix sed utility. The markdown files looked like this:
Fr: ## Bienvenue
En: ## Welcome
Fr: Bienvenue sur mon site.
En: Welcome on my website.
It felt wrong to have text in multiple languages intermixed like that. Not only did it look weird, it also would have conflicted with my writing process. So I settled on another structure where posts are kept in separate files for each language under a folder. For example, the markdown files for this post are in posts/2019-03-15-from-jekyll-to-hakyll/en.md and posts/2019-03-15-from-jekyll-to-hakyll/ko.md. Having Hakyll find and transform these files into the appropriate HTML files was straightforward.
The problem was that posts needed links to their versions in other languages. To create the links, Hakyll had to know what other files existed in the same folder as the file being built. This is an IO operation that must be represented inside the IO type in Haskell, so I listed all post directories and files within the main :: IO () function, then used them to build the posts.
Another problem was actually creating the links in the templates. I had to iterate over the available language versions, which would have been simple if I could pass the Hakyll template a custom tuple or record containing link texts and link URLs. But Hakyll templates only accept the Item a type, which represents some content and an identifier. So I created a list of Item String values with "en" or "ko" as their identifiers and empty strings as their bodies, since I only needed the identifiers, then passed the list to the template to build the appropriate links. It’s not necessarily a misuse of the Item a type, but it felt like an unnecessarily convoluted way to create links.
I had first learned of the concept of vertical rhythm while reading a blog post about it. I loved how the text looked in the author’s posts, so I asked for permission and copied the author’s CSS into the previous version of my blog. At the time I wasn’t familiar with CSS, so I had copied the entire CSS file. Having become more comfortable with CSS since then, I decided to understand the concept properly and reimplement it myself. Another post on the topic was very informative, and I followed its advice to create a rudimentary vertical rhythm. The result was not as elegant as Sylvain’s, but it was something I could understand and maintain well enough. I also created a rem-based layout inspired by another blog.
I also decided to move my hosting from GitHub Pages to Netlify. The main reason was that Netlify supports redirection rules, allowing me to check users’ HTTP headers and send them to /ko/index.html or /en/index.html accordingly. GitHub Pages did not provide this feature, so I had to serve English content in index.html and Korean content in a separate index_ko.html file, an inconsistency that had been bothering me from the beginning. Netlify also provides most of GitHub Pages’ strengths, such as automatically building and deploying the site upon a new push to the source repository, while supporting many more features, so I highly recommend it over GitHub Pages.
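For reference, the language-based redirect can be expressed in Netlify’s _redirects file roughly like this. This is a sketch of mine, not the author’s actual configuration; the exact rules depend on the site layout:

```
# Send browsers that prefer Korean to the Korean index; everyone else to English.
/    /ko/index.html    302    Language=ko
/    /en/index.html    302
```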
After I finished rewriting the blog, I realized that I still did not have an Elm syntax highlighter. To convert markdown files to HTML, Hakyll uses a Haskell library called Pandoc, which in turn uses a library called Skylighting for syntax highlighting. And Skylighting was missing the syntax definition for Elm. So after all that adventure through Haskell types and new CSS techniques, I was back where I had begun my yak shaving journey. I was a bit dismayed, but I decided to take one last step and wrote the missing syntax definition, which is now awaiting review in both Skylighting and KDE’s syntax highlighting library. In the meantime, I’ve configured my stack.yaml to use my fork of Skylighting. That finally concluded the rewrite.
The rewrite took far longer than I had expected. If someone asked me whether I would recommend Hakyll as a static site generator for beginners, I wouldn’t. Haskell, with its learning curve, is overkill for a static site generator; a Go or JavaScript generator would have been easier and faster. On the other hand, this was a great opportunity to learn by writing an actual Haskell program. So if you’re already familiar with static site generators, and practicing Haskell is your goal, then I would definitely recommend Hakyll, as the library has straightforward abstractions and plenty of examples and tutorials online.
In Elm, the update function is responsible for changing the Model state. Depending on how you structure your Model, Msg, and update, sometimes you may want to call the update function again with another Msg after handling the current one. Recursively calling update is straightforward.
type Msg
    = FirstMsg
    | SecondMsg


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        FirstMsg ->
            update SecondMsg model

        SecondMsg ->
            ( model, Cmd.none )
But sometimes you may want to trigger multiple Msgs. Task can be used here.
type Msg
    = FirstMsg
    | SecondMsg ()
    | ThirdMsg ()


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        FirstMsg ->
            let
                cmd =
                    Cmd.batch
                        [ Task.perform SecondMsg (Task.succeed ())
                        , Task.perform ThirdMsg (Task.succeed ())
                        ]
            in
            ( model, cmd )

        SecondMsg _ ->
            ( model, Cmd.none )

        ThirdMsg _ ->
            ( model, Cmd.none )
The caveat is that there’s no guaranteed ordering between SecondMsg and ThirdMsg, and these subsequent Msgs require arguments. Most importantly, I think recursively calling update is a bad practice. Use this approach only when there’s absolutely no other way.
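If you do need a deterministic order, one alternative is to fold over update directly instead of going through Task, so messages are applied strictly in list order. This is my own sketch under the same Model and Msg assumptions as the examples above, and the warning about calling update recursively still applies:

```elm
applyMsgs : List Msg -> Model -> ( Model, Cmd Msg )
applyMsgs msgs model =
    -- Apply each Msg in order, threading the Model through and
    -- collecting any Cmds produced along the way.
    List.foldl
        (\msg ( currentModel, cmds ) ->
            let
                ( newModel, cmd ) =
                    update msg currentModel
            in
            ( newModel, cmd :: cmds )
        )
        ( model, [] )
        msgs
        |> Tuple.mapSecond Cmd.batch
```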