Do Things Right with TypeScript

A collection of pointers for the present and future.

Print the whole error

`tsc --noErrorTruncation`

Why tsconfig.json is hard

tsconfig.json is important, because the TS compiler does way more, way more flexibly, than the Java compiler does. It’s both a transpiler and a typechecker.

What kind of JS do you want tsc to output? Choose your level of ECMAScript compatibility (as ancient as ES3 or as modern as ESnext) and also your module system (commonjs, amd, or several more).

What will magically be available in your runtime? Bring in the type declarations for these things (such as the DOM) with the `"lib": ["DOM"]` compiler option, or in `"types": ["node"]` (Node module globals like `__filename`, or Node built-ins like `fs`).

Also choose how stringent the typechecking is, with “strict” and its various suboptions.

Choose where your input files are, and where your output files go.

Choose what to output: only JS? sourcemaps? type declarations? type declaration maps?  … and for those maps, choose relative paths to the source.
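Putting those choices together, here is a minimal sketch of a tsconfig.json; the specific values are only examples of the choices above, not recommendations:

{
  "compilerOptions": {
    "target": "es2017",          // what kind of JS comes out
    "module": "commonjs",        // which module system it speaks
    "lib": ["es2017", "dom"],    // what the runtime magically provides
    "types": ["node"],           // which @types packages are global
    "strict": true,              // how stringent the typechecking is
    "rootDir": "src",            // where the input files are
    "outDir": "dist",            // where the output files go
    "sourceMap": true,           // emit sourcemaps
    "declaration": true,         // emit type declarations
    "declarationMap": true       // emit type declaration maps
  },
  "include": ["src/**/*"]
}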

The good news is: even if compilation has errors, tsc will still output JS (unless you turn on `noEmitOnError`). So you can run and test your code even while a tricky compile error you can't figure out is plaguing you.

Iterate through objects and arrays

There exists both `for (const a of array)` and a similar construct which shall not be named but contains the word `in` instead of `of`. Do not use that one.

To iterate through an array: for (const a of array) { … }
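A tiny example of the difference (words is just a stand-in array):

const words = ["impute", "weft"];

for (const w of words) {
   console.log(w);   // the values: "impute", then "weft"
}

// for (const w in words) { … } would give you the indices "0" and "1" instead.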

… ok I can’t stand it anymore I’m moving this post to Medium. Google clearly does not care about this platform, it may go away like Google Reader, and it is super painful to put code here.

https://medium.com/@jessitron/do-things-right-with-typescript-7a3ad7371387

Do Things Right with npm install

Lately I’ve been wrestling with npm. Here are some rules I’ve learned:

Use `npm ci` rather than `npm install`

`npm ci` will bring down exactly the dependencies specified in package-lock.json. `npm install` does more than that; it also tries to update some libraries to a more recent version. Sometimes it updates URLs or other noise in package.json so that my `git status` is dirty. Sometimes it does deduping. Sometimes it sticks with whatever version you have lying around. I haven't figured it out. It seems to be pretty dependent on the current circumstances in my filesystem.

Now I only use `npm install` if I specifically want to change the dependencies in my filesystem.
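In day-to-day terms (left-pad stands in for whatever library you actually mean to add):

$ npm ci                  # reproducible: install exactly what package-lock.json says
$ npm install left-pad    # only when I mean to change dependencies; updates package.json and the lock file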

Use `npm install --save-exact`

Especially for snapshots. Semver does not work for snapshots or branches or anything but releases, and npm only works with semver. If you are not using a release (if you publish with build tags or branch tags or anything like that), do not give npm any sort of flexibility. It will not work. Specify a precise version, or it will give you nasty surprises, like deciding some alphabetically-later branch is better than the master-branch version you specified.
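For example (the package name and prerelease tag here are made up):

$ npm install --save-exact @mycompany/my-lib@1.0.0-my-branch.20180601

After that, package.json pins "@mycompany/my-lib": "1.0.0-my-branch.20180601" with no ^ or ~ in front of it, so npm has no room to improvise.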

Use `npm view` to check the status of a library

This is quite useful. Try `npm view <package-name>` and it brings to your command line the info you can get from the npm website. You can ask it for specific fields. To get the latest version of chalk:

$ npm view chalk dist-tags.latest
2.4.1

If you want to do anything programmatic with this info, the “do things right” flag for `npm view` is `--json`.
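For example, both of these are handy (the output is whatever JSON npm sends back):

$ npm view chalk dist-tags --json    # all the dist-tags, as a JSON object
$ npm view chalk versions --json     # every published version, as a JSON array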

Try `npm ls` but then dig around on the filesystem

Exploring the dependency tree, `npm ls` is really cool; it shows the tree to you. You can see where you're getting a specific library with `npm ls <package-name>`, except that it doesn't always work. In the end, I dig around in my node_modules directory, using `find` to look for the real thing (see the example below).
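Something like this, with chalk standing in for whatever library you're hunting:

$ npm ls chalk                            # where does chalk come from in the tree?
$ find node_modules -type d -name chalk   # every copy of it actually on disk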

Other times I use my little node-dependency dungeon explorer game to see what version of stuff is where. 

These are just a few of the nasty surprises I’ve found moving from Java to TypeScript, from maven dependencies to npm. Dependency management is an unsolved problem, and the people working on npm have made huge improvements in the last few years. I look forward to more.

Understanding Promises in JavaScript (with yarn and Legos)

TL;DR: creating a Promise splits off another path of execution, and the Promise object represents the end of that path. Calling .then adds code to the end of that path.

You can think of your program's execution as following a piece of yarn. This video illustrates the difference between a synchronous program and the same program using Promises:

Promises let you be explicit about what needs to happen after what, while giving you more flexibility than “each of these things happens one at a time in this order” (the default flow of a simple synchronous program).

The negative is that when you want to specify “do this after that Promise,” you have to package up that code and pass it to .then(). The Promise object holds the end of the yarn representing its path of execution; .then() ties more code onto the end and returns the new end.

See this in the readConfig function, which reads a file and parses its contents. The synchronous version executes on the program’s usual path of execution: readFileSync retrieves some bits, and then JSON.parse turns them into a useful object.

synchronous: one piece of yarn proceeds straight down the code.
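Here is a sketch of that synchronous readConfig (the exact signature is my guess, not the code from the video):

import { readFileSync } from "fs";

function readConfig(path: string): object {
   const bits = readFileSync(path, "utf8");   // retrieve some bits
   return JSON.parse(bits);                   // turn them into a useful object
}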

In the version with promises, readConfig returns immediately, but what it returns is the end of a piece of string. It’s a piece of string that includes readFile, which fetches some bits; tied on by .then() is JSON.parse, which turns those bits into a useful object.

promises: the end of a string is returned, an orange piece tied on to a dark blue piece
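A sketch of the Promise version (again an assumed signature, with readFile promisified via util.promisify):

import { readFile } from "fs";
import { promisify } from "util";

const readFileP = promisify(readFile);

function readConfig(path: string): Promise<object> {
   return readFileP(path)                             // fetch some bits, eventually
      .then((bits) => JSON.parse(bits.toString()));   // tied on: turn them into a useful object
}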

The useful object will be available at the end of the orange string to whatever code gets tied on to it later.

Promises beat callbacks in this respect: when you start up the asynchronous task, you don’t have to provide alllll the code that needs to execute after it. You can add more later, as long as you keep hold of the end of the string.

Don’t lose the end of the string! If you don’t need it to add any more code, tie the string off neatly with .catch() — otherwise an error might come out of a stray end and mess up your program. (I could do another video on that.)
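For example (doSomethingWith is a placeholder for your own code, and readConfig is the promise-returning sketch from above):

declare function doSomethingWith(config: object): void;   // placeholder

readConfig("config.json")
   .then((config) => doSomethingWith(config))
   .catch((err) => console.error("could not read config", err));   // tie off the end neatly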

Promises don’t beat callbacks in that you still have to wrap subsequent code up into a function. It gets messy when you have .then() calls within .then() calls. But wait! Don’t get discouraged!

In TypeScript (and in JavaScript as of ES2017), we can write asynchronous code in the same simple format using async and await. While the code looks almost the same as the synchronous version, the paths of execution are more like the Promises one.

async function: just as many pieces of yarn as the Promises, with no calls to .then()

The async function returns immediately — don't be fooled by that return statement way at the end. It splits off a path of execution, which does work (here, reading the file) until it hits the await keyword. The rest of the code (parsing) becomes another piece of string. await ties the strings together just like .then() (except way prettier). At the end of an async function is a return statement, which supplies the value that will come out the end of the string. An async function always returns a Promise.
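A sketch of the async/await version, with the same assumed signature as before:

import { readFile } from "fs";
import { promisify } from "util";

const readFileP = promisify(readFile);

async function readConfig(path: string): Promise<object> {
   const bits = await readFileP(path);   // readConfig returns here; the rest is tied on
   return JSON.parse(bits.toString());   // this value comes out the end of the string
}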

Promises give you more control, so they give you more to think about. This means they’ll always be more complicated than synchronous code. With async and await we get both control and clarity: what Avdi calls “straight line code that just thunks itself onto the work queue whenever it gets stuck.” Don’t fear Promises, do use TypeScript, and do keep hold of the ends of your strings.

Reference: Typescript asynchronous sleep that returns a promise

I want some code to execute after a delay. I want to do this with promises, in TypeScript, asynchronously. Apparently this is hard. Here is the spell:

import { promisify } from "util";

const sleepPlease: (ms: number) => Promise<void> =
    promisify((ms: number, callback: () => void) => setTimeout(callback, ms));

const slow: Promise<string> =
    sleepPlease(500).then(() => "yay finally");

I imported promisify from "util". setTimeout is built in, but its arguments are in the wrong order to pass naturally to promisify: promisify wants the callback to come last, so the little wrapper function flips them.
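If you are already inside an async function, the same sleepPlease reads even more simply (a small sketch):

async function demo(): Promise<string> {
   await sleepPlease(500);
   return "yay finally";
}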

Dictionary Objects in JavaScript and TypeScript

TL;DR: when using an object as a dictionary in TypeScript/ES6, iterate through it using `Object.keys()`.

Coming from statically typed languages, I keep looking for a Map or Dict type in TypeScript (or JavaScript). People use objects for this, though. Objects have key-value pairs in them, and you can add them and delete them and declare them statically and it looks nice, especially in literals.

const dictionary = {
   impute: "attribute to",
   weft: "the holding-still strings on a loom",
}
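If you want the compiler's help with what can go into it, you can give a dictionary like this an index-signature type (a small sketch; Record<string, string> works too):

const typedDictionary: { [word: string]: string } = {
   impute: "attribute to",
   weft: "the holding-still strings on a loom",
}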

But using a JavaScript object as a dictionary has its quirks.

There are three ways to iterate through the values in the object. (This is TypeScript syntax.)

Flip through the enumerable keys defined on an object:

for (const key of Object.keys(dictionary)) {
   const value = dictionary[key]
   console.log(`${key} -> ${value}`)
}

This one makes the most sense to me; it’s what I expect from a dictionary. It flips through the values in an array of the keys defined on that object directly, no prototypal inheritance considered. This is how JSON.stringify() prints your object.

Flip through the enumerable keys defined on that object and its prototype chain:

for (const key in dictionary) {
   const value = dictionary[key]
   console.log(`${key} -> ${value}`)
}

This is the easiest one to write. It flips through the keys defined on the object and its prototype chain. If you’re using an ordinary object for this, and no one has done anything bizarre like add an enumerable property to Object, it’s fine. tslint hates it though; it bugs me about “for(…in…) statements must be filtered with an if statement.” tslint is like “OMG, you do not know what is on that thing’s prototype chain it could be ANYTHING”

I find it backwards that for(…in…) flips through property names of an object, while for(…of…) flips through values in an array. This confuses me daily in TypeScript. If you accidentally use for(…of…) instead of for(…in…) on an object, then you’ll see 0 iterations of your loop. Very sneaky. If you accidentally use for(…in…) on an array, you get the indices instead of the values, also confusing. TypeScript and tslint don’t warn about either error, as far as I can tell. 😦

Flip through the enumerable and non-enumerable keys defined on that object:

for (const key of Object.getOwnPropertyNames(dictionary)) {
   const value = dictionary[key]
   console.log(`${key} -> ${value}`)
}

This one flips through only keys on the object, not its prototype chain, and also gives you the names of non-enumerable properties. You probably don’t want those.

What are non-enumerable properties?

Conceptually, they're properties that don't make sense to flip through, that we don't want JSON.stringify() to look at. They're hidden from for(…in…) iteration and from Object.keys(). You can still access them on the object. For instance, constructors of TypeScript classes are non-enumerable properties. Methods on built-in types like Array and Object are non-enumerable. They stay out of the way.

When would we want to flip through them, like in Object.getOwnPropertyNames()?
I don’t know, maybe for debugging.

Why make a non-enumerable property?
I hit a use case for this today: serializing an instance of a class with recursive fields. JSON.stringify() can’t print recursive structures.

Side quest: Making recursive objects printable

In TypeScript, every field in a class instance (including any inherited field) is an enumerable property, returned from Object.keys() and printed by JSON.stringify(). See this TreeNode class that tracks its children, and its children track it:

class TreeNode {

   private _parent: TreeNode;
   public children: TreeNode[] = [];

   public constructor(public readonly value: string) {
   }
}

Printing an instance of TreeNode gives me: `TypeError: Converting circular structure to JSON` waah 😦

Here’s a tricky way to say “Hey JavaScript, don’t print that _parent field”. Explicitly override its enumerable-ness in the constructor.


class TreeNode {

   private _parent: TreeNode;
   public children: TreeNode[] = [];

   public constructor(public readonly value: string) {
      Object.defineProperty(this, "_parent", { enumerable: false });
   }
}
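A quick check that this works (assuming something else wires up _parent when a child is added):

const root = new TreeNode("root");
root.children.push(new TreeNode("leaf"));
console.log(JSON.stringify(root));   // prints the tree, with no circular-structure error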

We can get tricky with TypeScript class fields. After all, they get tricky with us.

Properties on a class instance

In TypeScript, class instance fields show up in Object.keys() and accessor-based properties (like with getters and setters) do not. They're properties on the prototype that the class defines. So if you want to see accessor properties, use for(…in…) on class instances. That gets the enumerable properties from the prototype chain. Watch out: these include methods.

Why iterate through the properties on a class? I don't know, maybe for debugging again. If you do it, I suggest skipping methods. This makes tslint happy because it's an if statement:

for (const propertyName in classInstance) {
  if (typeof classInstance[propertyName] !== "function") {
    console.log(`${propertyName}=${classInstance[propertyName]}`);
  }
}

Recommendations

If you have a class instance, access its properties with dot notation, like treeNode.children. That way TypeScript can help you avoid mistakes. If you have a dictionary object, access its properties with index notation, like dictionary["impute"] (and turn off the angry tslint rule). Class instances have specific types; dictionary objects are type object. Access the contents of a dictionary using Object.keys().

a Rug story: adding test cases

These days I work on Rug, Atomist’s library for coding code modifications.

Adding a feature, I start by creating a test. While it’s tempting to create a narrow test around the piece of code I want to change, it’s better to create an API-level test. Testing at the outside has a few benefits: it tells the story of why this feature is needed; it drives pleasing API design; and it places minimum constraints on the implementation. The cost is, it’s more work.

The API level of Rug is in TypeScript, where people write programs to modify other programs. The test compiles the TypeScript to JavaScript, and rug executes that inside the JVM, where our Scala code does the tricky work of implementing the Rug programming model — navigating the project, parsing code, and making atomic modifications (it all works or none of it is saved). This means that my API-level tests include a TypeScript file and a Scala file, plus a bunch of wiring to hook them together. I get tired of remembering how to do this. Plus, we’re constantly improving the programming model in TypeScript, so “the right way” is a shifting target.

Last year, I would have copied an existing test (which one is up to date? I don’t know! guess and hope it works), modified parts of it for my needs (and forgotten some), embedded the TypeScript code as a string in Scala (seems easier than making a .ts file), and tried to abstract away some of the repetitive bits that are shared between tests (even though that obscures the storyline of the test).

This year, I have a new tool. About the third time I needed a new test, I wrote a program to create it for me. I wrote a Rug! My AddTypeScriptTest Rug editor creates a new TypeScript file in test/resources, and a new Scala file in test/scala. It bases these off of sample files that exemplify the current standard in Rugs and their tests, performing all the modifications that I mess up in the copy-paste-modify strategy.

me:

rug edit -l AddTypeScriptTest class_under_test=com.atomist.rug.NewFeature

my Rug program:

  • copies SampleTypeScriptTest.scala to a new location. Changes the package name, the class name, and the location of the TypeScript file it will load.
  • copies SampleTypeScriptTest.ts to a new location. Changes the name of the class and the exported instance.

SampleTypeScriptTest.scala and SampleTypeScriptTest.ts form a real test in rug’s test suite, so I know that my baseline continues to work. When I update the style of them (as I did today), I can run the sample test to be sure it works (caught two errors today). I maximize their design to best tell the story of how rug goes from a TypeScript file to a Rug archive to running that program on a separate project and seeing the results. This helps people spinning up on Rug understand it. Repetition (of the Scala package name and the path to the test program, for instance) doesn’t hurt because a program is modifying them consistently (bonus: IntelliJ will ctrl-click into the referenced file on the classpath. It didn’t when that repetition was abstracted). If I want to change the way all these tests work, I can do that with a Rug editor too, since they’re consistent. Ahhhh the consistency: when a test breaks, and it looks exactly like the other tests except for meaningful differences, debugging is easier.

I created this Rug editor inside the rug project itself, since it’s only relevant to this particular project. Then I run the rug CLI in local mode, on the local project, and poof. I’ve used rug to modify rug using a Rug inside rug. Super meta! (It doesn’t have to be so incestuous. Other days, I use rug to modify any project using a Rug in any Rug archive.)

If you want to create a Rug to automate your own frequent tasks, install the Rug CLI and, from your project root, use this Rug: `rug edit atomist-rugs:rug-editors:AddLocalEditor editorName=WhatDoYouWantToCallIt`. Find your starting point in .atomist/editors/WhatDoYouWantToCallIt.ts

Pop into Atomist community slack with questions and we will be soooo happy to help you.