Posts tagged Author: Gabriel Scherer
This year I reviewed many ICFP submissions, and got to be on the receiving end of just as many author responses (also sometimes called, somewhat combatively, rebuttals). I found that there was a large difference between the official written advice on author responses and what I, as a reviewer reading the responses, found effective. In particular, I now believe that limiting yourself to 500 words is a mistake, and that we should stop giving that advice altogether.
In this blog post, reporting on a collaboration with Li-Yao Xia, I will show an example of how some results that we traditionally think of as arising from free theorems / parametricity can be established in a purely “syntactic” way, by looking at the structure of canonical derivations. More precisely, I prove that \(
\newcommand{\List}[1]{\mathsf{List}~#1}
\newcommand{\Fin}[1]{\mathsf{Fin}~#1}
\newcommand{\Nat}[1]{\mathbb{N}}
\newcommand{\rule}[2]{\frac{\displaystyle \array{#1}}{\displaystyle #2}}
\newcommand{\judge}[2]{{#1} \vdash {#2}}
\newcommand{\emptyrule}[1]{\begin{array}{c}\\[-1em] #1 \end{array}}
\forall \alpha.~\List{\alpha} \to \List{\alpha}
\) is isomorphic to \(
\Pi(n:\Nat{}).~\List{(\Fin{n})}
\) where \(\Fin{n}\) is the type of integers smaller than \(n\), corresponding to the set \(\{0, 1, \dots, n-1\}\).
PRL recently produced three papers for the SNAPL conference.
- Linking Types for Multi-Language Software: Have Your Cake and Eat It Too, by Daniel Patterson and Amal Ahmed.
- Search for Program Structure, by Gabriel Scherer.
- Migratory Typing: Ten Years Later, by Sam Tobin-Hochstadt, Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, Ben Greenman, Andrew M. Kent, Vincent St-Amour, T. Stephen Strickland, and Asumu Takikawa.
I believe that bullets are one of the most impactful features of recent versions of Coq, among those that non-super-expert users can enjoy. They have had a big impact on the maintainability of my proofs. Unfortunately, they are not very well known, because some introductory documents have not been updated to use them.
Bullets are a very general construction and there are several possible ways to use them; I have iterated through different styles. In this post I will give the general rules, and explain my current usage style.
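As a taste of what this looks like (a minimal example of my own, not taken from the post): a bullet focuses one of the subgoals created by a tactic such as `destruct`, and Coq checks that the focused subgoal is closed before the next bullet starts.

```coq
Lemma andb_comm : forall b1 b2 : bool, andb b1 b2 = andb b2 b1.
Proof.
  intros b1 b2. destruct b1.
  - (* case b1 = true *)
    destruct b2.
    + reflexivity.
    + reflexivity.
  - (* case b1 = false *)
    destruct b2.
    + reflexivity.
    + reflexivity.
Qed.
```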
How do researchers know whether they are doing “enough” or “too many” reviews? A measurable goal is to be review-neutral: to have demanded, through our submissions, as many reviews as we have produced as reviewers.
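One way to make this accounting concrete (my own formalization of the idea, not necessarily the exact bookkeeping from the post): if you co-author submissions \(s_1, \dots, s_k\), where submission \(s_i\) has \(a_i\) authors and receives \(r_i\) reviews, then your share of the reviewing demand is
\[
D = \sum_{i=1}^{k} \frac{r_i}{a_i},
\]
and being review-neutral means producing at least \(D\) reviews yourself. For example, two submissions with three co-authors each, each receiving three reviews, demand \(3/3 + 3/3 = 2\) reviews from you.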
Max New, Daniel Patterson and Ben Greenman recently wrote three two-page abstracts on what they are working on right now. Come have a look; any feedback is welcome!
In the early days of the famous Emacs/Vim debates, Emacs was often ridiculed for its bulkiness (Eight Megabytes And Constantly Swapping, etc.). The computational power of our computers has grown much faster than Emacs’ bloat: it takes exactly one second to load on my machine. However, our workflows have also changed, and my workflow involves frequently starting new text editors: on each git commit, for example, or when I use a Firefox extension to edit the content of a textarea in a proper editor.
In this blog post, I describe how to use emacsclient
to reuse an existing Emacs process when creating a new editor window, which reduces editor startup times from 1s to 0.150s on my machine.
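A minimal setup along these lines (my own sketch, not necessarily the exact configuration from the post) is to keep one long-lived Emacs server and point other tools at emacsclient:

```sh
# Start a long-lived Emacs process (or call (server-start) in an existing one).
emacs --daemon

# Have git and other tools open their editor inside that process;
# with --alternate-editor="" emacsclient starts a daemon itself if none is running.
export EDITOR='emacsclient --alternate-editor=""'
git config --global core.editor 'emacsclient --alternate-editor=""'
```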
Are you interested in printed conference proceedings? We have a good stack of them left over at Northeastern University (Boston, MA), and it seems that nobody wants them!
If you are a student, you should consider applying to become an ICFP 2016 student volunteer! The deadline for application is July 31st, 2016.
James Fisher has a blog post on a case where GHC’s runtime system imposed unpleasant latencies on their Haskell program:
Low latency, large working set, and GHC’s garbage collector: pick two of three
The blog post proposes a very simple synthetic benchmark that exhibits the issue (basically, latencies incurred by copy time), with pauses of around 50ms that the author considers excessive. I thought it would be amusing to reproduce the synthetic benchmark in OCaml and Racket, to see how other GCs handle this workload.
Without further ado, the main take-aways are as follows: the OCaml GC has no issue with large objects in its old generation, as it uses mark-and-sweep rather than copying collection, and it exhibits worst-case pauses below 3ms on this benchmark.
The Racket GC also does not copy the old generation, but its incremental GC is still in its infancy (compared to the throughput-oriented setting, which works well), so the results are not as good. It currently suffers from a “ramp-up” effect, which I will describe, that causes large pauses at the beginning of the benchmark (up to 120ms of latency), but in its steady state the longest pauses are around 22ms.
Please keep in mind that the original benchmark is designed around a very specific workload that exercises worst-case behavior for GHC’s garbage collector. This does not mean that GHC’s latencies are bad in general, or that the other tested languages have smaller latencies in general.
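For reference, here is a rough OCaml sketch of the kind of benchmark being discussed (my own reconstruction; the exact parameters are assumptions, and the actual implementations are in the repository linked below): keep a sliding window of large messages alive so they sit in the old generation, and record the worst-case duration of a single iteration.

```ocaml
(* Compile with: ocamlfind ocamlopt -package unix -linkpkg bench.ml *)

let iterations   = 1_000_000  (* number of messages pushed in total *)
let window_size  = 200_000    (* messages kept live at any given time *)
let message_size = 1024       (* bytes per message *)

let () =
  let window = Hashtbl.create window_size in
  let worst = ref 0.0 in
  for i = 0 to iterations - 1 do
    let start = Unix.gettimeofday () in
    (* add a fresh message and drop the oldest one, keeping the live set constant *)
    Hashtbl.replace window i (Bytes.create message_size);
    if i >= window_size then Hashtbl.remove window (i - window_size);
    let elapsed = Unix.gettimeofday () -. start in
    if elapsed > !worst then worst := elapsed
  done;
  Printf.printf "worst-case iteration time: %.3f ms\n" (!worst *. 1000.)
```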
The implementations I use, with a Makefile encapsulating the logic for running and analyzing them, are available in a Gitlab repository: