This section of the site is dedicated to all things I find interesting, including (but not limited to) computing, computer science and programming languages. I try to categorise posts, but have not been very strict about it.
This page only shows the last ten posts. For a comprehensive list, please see the archive.
*_Disclaimer_* All the code quoted in this post is extracted from the Rust compiler source code. Most snippets are annotated with the file name relative to @$RUST_ROOT/src@.
Rust is an awesome language. In the beginning I had a lot of trouble coming to terms with some of the decisions taken in the language specification. But a little over a year later, I must admit that I am enjoying it.
To be honest, it is probably not only the language itself, but the community around it - there are a lot of opinions, but the general tune is "we want to make the best systems programming language possible". That will, of course, leave some less content and some very content, but all in all there is a lot of excitement around it.
Recently, I have been interested in writing compiler plugins, as I think they will come in handy for my Master's project. The documentation is a little sparse, only grazing the surface, but it's probably for the better as that whole section of the compiler is still marked as unstable. On the other hand, I doubt it will change drastically before 1.0, as quite a few projects make use of it as it is (to great effect I might add).
Looking through the Rust source code, trying to learn about macro expansions, I naturally came across the definition of the different types of syntax extension available in Rust (file: @libsyntax/ext/base.rs@), defined as @enum SyntaxExtension@:
* @Decorator@: A syntax extension attached to an item, creating new items based on it.
* @Modifier@: Syntax extension attached to an item, modifying it in-place.
* @MultiModifier@: Same as above, but more flexible (whatever that means).
* @NormalTT@: A normal, function-like extension; @bytes!@ is one such example (see the use-site sketch just after this list).
* @IdentMacroExpander@: Like a @NormalTT@, but with an extra @ident@ before the block.
* @MacroRulesTT@: Represents @macro_rules!@ itself.
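To make a couple of these concrete, here is what they look like at the use site (an illustrative snippet, not taken from the compiler; @Point@ is just a made-up struct):

bc. #[derive(Clone)]   // `derive` is a Decorator: it creates new items based on the struct
struct Point { x: i32, y: i32 }
fn main() {
    let v = vec![1, 2, 3];   // `vec!` (like `bytes!`) is a NormalTT, a function-like extension
    let p = Point { x: 1, y: 2 }.clone();
    let _ = (v, p);
}
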
How interesting. Then a question popped up: *How is a standard derivable trait such as @Show@ actually derived?*
First of all, in the same file, there is a function defining all the basic syntax extensions, @initial_syntax_expander_table()@ in which we find the following lines:
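Paraphrased (a sketch of the registration rather than a verbatim quote):

bc. syntax_expanders.insert(intern("derive"),
                        Decorator(box ext::deriving::expand_meta_derive));
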
which tells us that the "derive" functionality is registered as a decorator (which makes sense), and expands to call the function @expand_meta_derive()@. This function is defined in @libsyntax/ext/deriving@ and is not much different from any other syntax extension.
First it checks the node type of @mitem@. If it is not a list, or if the list is empty, an error is emitted. Otherwise all the items are inspected in turn. This gives us exactly what @derive@ can derive:
bc.. // File: libsyntax/ext/deriving/mod.rs in function expand_meta_derive()
match tname.get() {
    "Clone" => expand!(clone::expand_deriving_clone),
    "Hash" => expand!(hash::expand_deriving_hash),
    "RustcEncodable" => {
        expand!(encodable::expand_deriving_rustc_encodable)
    }
    "RustcDecodable" => {
        expand!(decodable::expand_deriving_rustc_decodable)
    }
    "Encodable" => {
        cx.span_warn(titem.span,
                     "derive(Encodable) is deprecated \
                      in favor of derive(RustcEncodable)");
        expand!(encodable::expand_deriving_encodable)
    }
    "Decodable" => {
        cx.span_warn(titem.span,
                     "derive(Decodable) is deprecated \
                      in favor of derive(RustcDecodable)");
        expand!(decodable::expand_deriving_decodable)
    }
    "PartialEq" => expand!(eq::expand_deriving_eq),
    "Eq" => expand!(totaleq::expand_deriving_totaleq),
    "PartialOrd" => expand!(ord::expand_deriving_ord),
    "Ord" => expand!(totalord::expand_deriving_totalord),
    "Rand" => expand!(rand::expand_deriving_rand),
    "Show" => {
        cx.span_warn(titem.span,
                     "derive(Show) is deprecated \
                      in favor of derive(Debug)");
        expand!(show::expand_deriving_show)
    },
    "Debug" => expand!(show::expand_deriving_show),
    "Default" => expand!(default::expand_deriving_default),
    "FromPrimitive" => expand!(primitive::expand_deriving_from_primitive),
    "Send" => expand!(bounds::expand_deriving_bound),
    "Sync" => expand!(bounds::expand_deriving_bound),
    "Copy" => expand!(bounds::expand_deriving_bound),
    ref tname => {
        cx.span_err(titem.span,
                    &format!("unknown `derive` trait: `{}`", *tname)[]);
    }
}
p. Straight from the heart (or kidney) of the beast! Not only do we clearly see that @Show@ is supported, mapping to the function @expand_deriving_show@, we also see that it comes with a deprecation warning, and we should prefer @Debug@ over @Show@. At the moment there is no difference, as they both map to the same function.
We are getting close to the end here. Instead of explaining what goes on I am going to quote the entire function @expand_deriving_show@:
This is beautiful! Deriving @Show@ looks a lot like we had written it by hand. We have a trait definition for @std::fmt::Debug@ with no additional bounds nor generics. There is one method called @fmt@ that takes @&self@ (borrowed explicit self) and a pointer to a @std::fmt::Formatter@ as arguments. The return type is @std::fmt::Result@.
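To see why it reads like hand-written code, this is roughly the shape of the implementation that the machinery produces, written out by hand for a made-up @Point@ struct (an illustration, not the actual expansion output):

bc. use std::fmt;
struct Point { x: i32, y: i32 }
// Roughly what #[derive(Debug)] gives us for Point
impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "Point {{ x: {:?}, y: {:?} }}", self.x, self.y)
    }
}
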
The compiler's trait definition is not the end of the story, however, since it does not mention the name of the structure we are trying to derive @Show@ for. That part takes place in @trait_def.expand()@. This function expands the trait definition, ensuring that the derived-upon item is either a struct or an enum, taking care of various possible error conditions, and juggling lifetimes, generics, @where@ clauses and associated types.
All this boils down to the following item creation:
which I will not even pretend to understand. We can conclude that we end up calling @cx.item@, creating a new item in the AST. The @item()@ method is not defined on @ExtCtxt@ itself, but rather declared in a trait @AstBuilder@, which is implemented for @ExtCtxt@.
bc.. // File: libsyntax/ext/build.rs
impl<'a> AstBuilder for ExtCtxt<'a> {
    // ...
    fn item(&self, span: Span, name: Ident,
            attrs: Vec<ast::Attribute>, node: ast::Item_) -> P<ast::Item> {
        // FIXME: Would be nice if our generated code didn't violate
        // Rust coding conventions
        P(ast::Item {
            ident: name,
            attrs: attrs,
            id: ast::DUMMY_NODE_ID,
            node: node,
            vis: ast::Inherited,
            span: span
        })
    }
    // ...
}
p. So there you have it: how @Show@ (or @Debug@) gets derived in Rust. It is a rather long story, with some gaps, but it is very instructive to skip around the compiler infrastructure and see how some of the AST-mangling syntax extensions do their work.
If you stuck with it this far, thanks for reading, hope you enjoyed it.
Workflow note: A fairly common workflow pattern has established itself:
* Create local branch, call it @fx@ for "feature x"
* Work on it for a while (committing frequently)
* Push it to @origin@
* Periodically merge @master@ into it
* Eventually merge it back into @master@
But I tend to forget some of the commands I need to type (especially when dealing with remote tracking branches). This is a quick run-down of the common commands.
bc. $ git checkout -b fx
Creates and checks out @fx@ branch.
The biggest problem sometimes is pushing this new branch to a remote. Very often I'll just do:
bc. $ git push origin fx
which achieves exactly that, but there is no remote tracking, i.e. something like the following is missing from @.git/config@:
bc. [branch "fx"]
remote = origin
merge = refs/heads/fx
which we can fix in a few ways. One way is simply adding the section to your config file, which is probably best to do through the CLI:
bc. $ git config branch.fx.remote origin
$ git config branch.fx.merge refs/heads/fx
or simply be smart enough to include @-u@ when pushing the branch the first time:
bc. $ git push -u origin fx
which takes care of setting exactly these tracking parameters in the configuration.
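For completeness, the remaining steps from the workflow above (periodically merging @master@ in, and eventually merging the feature back) look roughly like this:

bc. $ git checkout fx
$ git merge master     # keep fx up to date with master
$ git checkout master
$ git merge fx         # eventually, fold the feature back in
$ git push origin master
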
This little gem showed up while perusing the Rust source code (@src/libstd/io/stdio.rs@):
bc.. And so begins the tale of acquiring a uv handle to a stdio stream on all
platforms in all situations. Our story begins by splitting the world into two
categories, windows and unix. Then one day the creators of unix said let
there be redirection! And henceforth there was redirection away from the
console for standard I/O streams.
After this day, the world split into four factions:
1. Unix with stdout on a terminal.
2. Unix with stdout redirected.
3. Windows with stdout on a terminal.
4. Windows with stdout redirected.
Many years passed, and then one day the nation of libuv decided to unify this
world. After months of toiling, uv created three ideas: TTY, Pipe, File.
These three ideas propagated throughout the lands and the four great factions
decided to settle among them.
The groups of 1, 2, and 3 all worked very hard towards the idea of TTY. Upon
doing so, they even enhanced themselves further than their Pipe/File
brethren, becoming the dominant powers.
The group of 4, however, decided to work independently. They abandoned the
common TTY belief throughout, and even abandoned the fledgling Pipe belief.
The members of the 4th faction decided to only align themselves with File.
tl;dr; TTY works on everything but when windows stdout is redirected, in that
case pipe also doesn't work, but magically file does!
p. I especially like that the TL;DR is located at the bottom.
Credit: From what @git blame@ tells me, the above quote was authored by "Alex Crichton":https://github.com/alexcrichton
Here's a great slide show explaining how to use GNU Autotools for a given project:
* "Autotools Tutorial":https://www.lrde.epita.fr/~adl/dl/autotools.pdf (PDF)
from "this site":https://www.lrde.epita.fr/~adl/autotools.html.
Personally, I found it very confusing to set up Autotools for the first time. The introductory texts available online rarely provide a full picture, since Automake and Autoconf are two different tools that just happen to be orchestrated together very often. On top of that, you have commands such as @autoheader@ and @autoreconf@ (the latter of which is warned against in some places, for reasons that are not entirely clear to me).
There are even fewer examples of how to set up a library. Apart from the aforementioned tools, there is also Libtool, which should alleviate headaches when building a library.
But to be honest, it all seems to be a matter of taste.
The goals of my project are:
* Building both statically and dynamically linkable libraries (@.a@ and @.so@ respectively; see the sketch just after this list)
* Building cross-platform, preferably according to some C standard (to improve portability)
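For reference, and leaving Autotools aside for a moment, producing those two artefacts by hand looks roughly like this (file names are placeholders):

bc. $ gcc -std=c89 -fPIC -c foo.c -o foo.o
$ ar rcs libfoo.a foo.o            # static archive
$ gcc -shared -o libfoo.so foo.o   # shared object
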
Notable projects that serve as inspiration points are:
* "Tig":http://jonas.nitro.dk/tig/: A text-mode interface for Git
* "GMP":https://gmplib.org/: The GNU Multi Precision Arithmetic library
* "libgit2":https://libgit2.github.com/: A portable, zero-dependencies C implementation of core Git methods
Notably, libgit2's zero-dependency and C89-compliant implementation makes it attractive as a source of inspiration, but for their build process they have, for some unfathomable reason, chosen CMake instead of @make@.
Rather, in order to achieve the goals listed above, I believe I could make do with @autoconf@ and @autoheader@ (but not @automake@) a la Tig, and generate a @config.make@, which could be fed into the Makefile.
If you want to write a library for distribution on most Un*x-like systems, chances are you'll want to use GNU Autotools. I have for a long time been curious how these tools work, and how the seemingly indecipherable syntaxes of @Makefile.am@ and @configure.ac@ were interpreted. And what is the relationship between @automake@, @autoconf@ and @aclocal@?
The following online book is well worth the read:
* "GNU Autoconf, Automake, and Libtool":https://sourceware.org/autobook/autobook/autobook_toc.html
So far the general idea is the following:
* @aclocal@ generates an @aclocal.m4@ by scanning @configure.ac@ (from the @man@ page)
* @autoconf@ is for generating @./configure@, which figures out the configuration of the installation system; while
* @automake@ is for generating a @Makefile.in@, a Makefile template (the typical invocation order is sketched below)
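Put differently, the usual dance on the maintainer's side is roughly the following (a sketch; the exact flags vary from project to project):

bc. $ aclocal                  # scan configure.ac, produce aclocal.m4
$ autoconf                  # produce ./configure
$ autoheader                # produce config.h.in
$ automake --add-missing    # produce Makefile.in from Makefile.am
$ ./configure && make       # what the end user eventually runs
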
A common pattern in C header files is the following:
I think I need to know something about linear types in order to understand session types, so here is some of the reading material I've consulted so far:
* "A taste of linear logic":http://homepages.inf.ed.ac.uk/wadler/papers/lineartaste/lineartaste-revised.pdf (PDF)
A great paper by Philip Wadler, a really good introductory text on linear logic and linear types.
* "Lively Linear Lisp":http://www.pipeline.com/~hbaker1/LinearLisp.html
A more pragmatic, direct introduction to linear types (not so much about linear logic), motivated in a functional setting (with LISP).
* "Introduction to Linear Logic":http://www.brics.dk/LS/96/6/BRICS-LS-96-6.pdf (PDF)
A more (seemingly) authoritative piece on linear logic, but I have not been able to print it yet; the printer outputs a garbled version, and the only part that prints nicely is the front page.
_*Summary*_ In an effort to come to grips with Rust's module and file organisation, I read all the material I could get my hands on, but none of it provided a good explanation for me. So I am typing this up as I go, attempting to provide a minimal example and explanation of how modules can be organised coherently and reusably in a Rust codebase.
Working on the "Matasano Crypto Challenge":http://cryptopals.com/ I've built up a little repository of Rust code. Some of this code I would like to re-use, and I would even like for some of the library code to use other parts of the library code. Imagine the following setup:
bc. // File: main.rs
mod foo;
mod bar;
fn main() {
    // Use both 'foo::foo_function()' and 'bar::bar_function()' here
}

As the modules @foo@ and @bar@ don't interfere with each other, running @rustc main.rs@ should work. But what if @bar@ would like to use some functionality defined in @foo@? Surely the following changes should work?
bc. // File: bar.rs
mod foo;
fn bar_function(v: &Vec<u8>) -> Vec<u8> {
    // Do something
    let _ = foo::foo_function(...)
}

But this doesn't compile. Running @rustc main.rs@ gives the following rather cryptic message
bc. $ rustc main.rs
bar.rs:1:5: 1:8 error: cannot declare a new module at this location
bar.rs:1 mod foo;
             ^~~
bar.rs:1:5: 1:8 note: maybe move this module `bar` to its own directory via `bar/mod.rs`
bar.rs:1 mod foo;
             ^~~
bar.rs:1:5: 1:8 note: ... or maybe `use` the module `foo` instead of possibly redeclaring it
bar.rs:1 mod foo;
             ^~~

The fastest way to fix the above is to group all the @mod@ declarations together in @main.rs@ and write @use foo@ in @bar.rs@ instead. But this method _requires_ the root crate (@main.rs@) to name _all_ of the modules that will be used (even if it doesn't use them directly).
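Concretely, that fix looks like the following (the signature of @foo_function@ is assumed here for illustration):

bc. // File: main.rs -- the crate root declares every module
mod foo;
mod bar;
fn main() { /* ... */ }
// File: bar.rs -- only `use`s foo instead of redeclaring it
use foo;
fn bar_function(v: &Vec<u8>) -> Vec<u8> {
    foo::foo_function(v)
}
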
But this solution is not really satisfactory.
It had to happen—of course I'm going to build my own keyboard. Recently I bought some oven-bake clay and started modelling the layout using some keys I borrowed from a friend.
The layout will be tailored to my hand, with four keys in a column per finger, plus three extra for the index finger and another two for the pinky. I also plan to have three keys for the thumb (it's baffling how little the thumb is actually used in regular day-to-day typing).
Right now it is only in the design phase. I have made a model for the right hand, but a few things are still missing. First off there is the actual modelling - how could this be built? Originally I wanted to build a model of clay, bake it and somehow make a mold or plastic "shell" from the model. This doesn't appear to be easy though.
I think the simplest "first design" would be to settle on the layout I want, build it on a flat surface, and postpone raised keys for version 2.
Secondly, there are the electronics: I need
* keys (Cherry MX Brown)
* key caps
* wiring
* diodes
* some sort of controller.
For the controller, I originally envisioned using the Arduino Uno, but it would seem a bit like overkill. Instead I'm considering the "Teensy":http://www.pjrc.com/teensy/ or the "McHck":https://mchck.org/
I want to post some design documents—pictures and drawings—soon.
Highly recommended reading
* "Keyboard Matrix Help":http://www.dribin.org/dave/keyboard/one_html/
Key set and key cap stores
Living in the EU, I would prefer to shop in the EU, since taxes can be quite horrendous on imports from the US.
* "The Keyboard Company":http://www.keyboardco.com/index.asp (base in the UK)
I'm redesigning this blog again. In my never-ending quest for good CSS frameworks I have so far tried
* UIKit ("getuikit.com":http://getuikit.com/)
* Foundation ("foundation.zurb.com":http://foundation.zurb.com/)
* Bootstrap ("getbootstrap.com":http://getbootstrap.com/)
all of them being more or less similar. Bootstrap has the most features and is the easiest to get up and running (with predictably high-quality results) - it is also the hardest to adapt in terms of colours and spacing. The experience with Foundation is more or less the same.
Until now UIKit has been the easiest to work with and it also doesn't feature
The next candidate is "Susy":http://susy.oddbird.net/. Susy is a different kind of framework, one which only provides grids - and only when you want them. It is much less opinionated than the larger frameworks when it comes to styling, but excels at the mathematics involved in aligning and spacing the various elements.
The layout you see now is built with Susy and is still unfinished. It features a top navigation bar and footer, and for the blog subsite a narrow 960px frame provides a better reading experience.
I just bought a new keyboard (again). It is from the same maker (Cooler Master) as the one I bought last time, but this time I went for the "Stealth TK", which means it has a ten-key numpad ("picture":http://gaming.coolermaster.com/images/products/76/image_652.jpg). The first thing I noticed as I unboxed it and started typing is that the arrow keys are embedded in the numpad. The coolest feature by far is the labelling on the keys: it sits on the sides of the keys instead of on top. It is the closest I have come so far to the sleek, clean look of Das Keyboard.
Not only do the keys look different, they also feel different. Compared to the Quickfire, this keyboard features Cherry MX Brown switches, which have a little tactile bump, whereas the Quickfire sports Cherry MX Blacks, which have no bump or click at all. So far I think I prefer typing on the Stealth, but I'm not so sure the numpad is a good idea.
Another difference is the weight. The Quickfire is very solid and sits firmly on the desk, whereas the Stealth feels much more plasticky. It doesn't weigh as much (although it's larger) - that being said, it still seems like a solid keyboard. An interesting (beginner's) project could be moving the keys from the Stealth into the frame of the Quickfire.
Having typed out the above as the first exercise in typing on this beast, I must say I am very pleased with the Cherry MX Brown switches compared to the Black version. The little tactile bump gives great feedback, and I think I am already typing faster than I did on the Quickfire.
h3. Building my own keyboard
Keyboards are extremely fascinating. I am always surprised by how little attention people pay to their keyboard. If you work in the IT industry, it is *the* tool you use the most, so why shouldn't it perform its very best?
To that end, I've been searching (and researching) what makes up a good keyboard. For sure it should be mechanical; Cherry MX switches are by far the most standard ones (and, it seems, the best). Furthermore, I think more work could be put into the ergonomics of the thing.
Below I've gathered some resources on keyboards and building keyboards
* "Building a keyboard Part 1":http://blog.keyboard.io/post/77078804805/building-a-keyboard-part-1
* "Building a keyboard Part 2":http://blog.keyboard.io/post/77078933799/building-a-keyboard-part-2
* "mchck.org":https://mchck.org/
* "blog.keyboard.io":http://blog.keyboard.io
* "Humble Hacker Keyboard":http://www.humblehacker.com/keyboard/
* "key64.org":http://www.key64.org/
* "Keyboard Matrix Help":http://www.dribin.org/dave/keyboard/one_html/
* "Teensy USB development board":http://pjrc.com/teensy/index.html
The two blog posts on building a keyboard are great reading if you, like me, are mostly a software guy, but dream of doing something worthwhile with some electronics.
I figure for starters I could use the Arduino Uno which is not being used for anything right now. I also borrowed some keys from a friend, which I could definitely use in a prototype.