Alson Kemp

Hackfoofery

Rust build/install: permission denied

without comments

For the benefit of humanity… I just got:

error: failed to run custom build command for log v0.4.11

Caused by:
  could not execute process /tmp/cargo-installGs2WwM/release/build/log-a97e745b31d3670c/build-script-build (never executed)

Caused by:
  Permission denied (os error 13)

This was caused by the noexec option on my /tmp/ partition. The fix: remove noexec from the /tmp entry in /etc/fstab, then run mount -a -o remount.
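In case it helps, here’s roughly what the check and the fix look like (your /etc/fstab line will differ):

# is /tmp mounted noexec?
findmnt -no OPTIONS /tmp
# rw,nosuid,nodev,noexec,relatime   <-- noexec is the culprit

# drop "noexec" from the /tmp entry in /etc/fstab, e.g.:
# tmpfs  /tmp  tmpfs  defaults,nosuid,nodev  0  0

# re-apply the options from fstab
sudo mount -a -o remount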

Written by alson

September 9th, 2020 at 4:40 pm

Posted in Rust

Cast Iron Skillets: meditations on seasoning

without comments

TL;DR: Do your worst and hack-and-slash at your cast iron skillet or carefully season it and exhibit it to your friends. Doesn’t matter; it’ll still be a cast iron skillet. Also, don’t kill seagulls and seals.

Not sure anyone will read this but I don’t particularly care: it’s a bit of light in the darkness of Covid.

I do everything with my cast iron pans. Tomato, wine, whatever. In contrast, many cast iron aficionados advocate loads of rules about how to use cast iron:

  • Don’t wash it with soap!
  • Don’t scrub it!
  • Don’t cook acidic foods in it!
  • In fact, don’t look at it! That’s quite enough!

Sure, follow those if you take an impractical view of very practical cast iron cookware. These rules remind me of so many little facts that get exploded into hard and fast rules, which are then passionately advocated without regard to the rules’ utility… Cast iron seasoning zealotry…

What’s the absolute worst thing you can do to a cast iron skillet? Remove the seasoning. Maybe let it rust a bit? Guess what? You still have a frickin’ cast iron skillet. These things get picked up from junk yards, left outside in the rain, whatever. How are people cooking on their great-grandmas’ cast iron skillets if these things are so delicate? Settlers carried these things on pack mules up yonder pass and all that. It’s fancy pig iron but it’s basically pig iron.

Seasoning is certainly important for cooking but most folks seem to think seasoning is important for showing. Seasoning is easily reapplied and can be applied however well and however often you desire. I don’t desire to do either much, so I don’t. Oops, I forgot to clean out the skillet last night? Guess what? I still have a great frickin’ cast iron skillet. Gave it a rinse but that wasn’t enough, so I scrubbed the hell out of it and removed the seasoning? Guess what? I still have a great frickin’ cast iron skillet. Left it in the garage for five years? Guess what? I still have a great frickin’ cast iron skillet.

If the skillet looks a little sad, I scrub it clean of any sticky bits (and probably a bunch of seasoning), rub some vegetable oil (blah, blah, flax seed oil) around on it with a paper towel and throw it in the oven for an hour (but I forget and it sits in a cold oven overnight…) or leave it on the stove with a low gas flame for 30 minutes. I dunno. It’s pretty non-stick and I don’t show it to people expecting them to gaze jealously at my skillet’s perfect seasoning.

The alternative to a bit of fiddling with cast iron is regular non-stick pans but the non-stick coating never seems to last that long and I want to use metal utensils to hack at my food. Then I have to throw away the no-longer-non-stick non-stick pan and I’m sure someone will tell me a seagull or seal will eat it and die. That seems like a bad thing so I just keep banging on my cast iron skillet and then re-seasoning it. With just 5 minutes per month you, too, could save a seagull or seal. Don’t be a bad person who kills seagulls and seals. Also, don’t be a cast iron seasoning missionary (yes, I see the irony).


Written by alson

August 16th, 2020 at 12:37 pm

Posted in Uncategorized

A Bit of A Continuation for Moore’s Law?

without comments

Note: CPU references in this post are all to Intel CPUs. Other CPU families took similar paths but did so with different timelines and trade-offs (e.g. the inclusion of FPU and/or MMU functionality in the CPU).

First, a historical ramble…

What follows is accurate enough for the purposes of this post…

Much as with so much on the web, Moore’s Law had a specific origin but has been through a number of updates/revisions/extensions to remain relevant to those who want it to remain relevant. Originally, it was about the number of transistors that could be built into a single semiconductor product. Presumably that number got awfully large and was meaningless to most people (transistor?), so Moore’s Law was sort of retooled to refer to compute capability (MIPS, FLOPS) or application performance (frames per second in a 3D video game, TPC-* for databases, etc.). If your widget was getting faster, then there was “a [Moore’s Law] for that” (to paraphrase Apple). And Moore saw and he was pleased.

But all of that getting-faster was, of course, underpinned by the various dimensions of scaling for semiconductors. Processors (the things most people care about the most) are made using MOSFETs (a very common type of transistor used to build processors/logic, but a bit different from those in the original Moore’s Law), and Robert Dennard wrote a paper noting that MOSFETs have particular scaling properties. See Dennard Scaling: “[if] transistor dimensions could be scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This would reduce circuit delays by 30% (0.7x) and therefore increase operating frequency by about 40% (1.4x). Finally, to keep the electric field constant, voltage is reduced by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%”. This was also known as “triple scaling” as it implied that three scaling factors would improve simultaneously: geometry decrease (density), frequency increase and power decrease (for equivalent functionality).
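To make the arithmetic concrete, here’s a tiny Go sketch of one generation of Dennard scaling (just the numbers from the quote above, nothing more):

package main

import "fmt"

func main() {
   // Linear dimensions shrink by 0.7x each generation.
   const k = 0.7

   area := k * k          // ~0.49: transistor area is roughly halved
   freq := 1 / k          // ~1.43: delays drop ~30%, so frequency rises ~40%
   energy := k * k * k    // C*V^2 with C and V each scaling by 0.7x: ~65% less energy per switch
   power := energy * freq // ~0.49: ~50% less power at the higher frequency

   fmt.Printf("area %.2fx, frequency %.2fx, energy %.2fx, power %.2fx\n", area, freq, energy, power)
}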

Read the rest of this entry »

Written by alson

December 31st, 2019 at 10:50 pm

Posted in Geekery

Science Fiction: The Economics of Star Travel

without comments

While I’m a fan of Alastair Reynolds’ science fiction and recently finished Poseidon’s Wake, I’m rather unsure of his treatment of interstellar travel. Within reasonable bounds, making allowances for the fact that it’s science fiction (hey, Conjoiner drives) and recognizing that he, not I, is a bona fide rocket scientist, his treatment of how to conduct interstellar travel seems realistic and sobering, though perhaps not sobering enough…

So let’s talk about money now and then…

Economics/Finance Backgrounder

The problem of how much money to spend now in order to reap a future gain is well studied in economics and finance. A discount rate is used to project financial amounts forward or backward, recognizing that $1 gained or spent at a future date is not valued at $1 now. For example, assume you had $10 and could invest it at a 5% rate in a completely safe instrument (say a bank bond) (you can’t right now, hey thanks Fed, but let’s assume that you could…). After 1 year, you’d have $10.50. Likewise, if I needed $10 now, you could lend me the $10 but you’d want me to promise to return you more than a total of $10.50 after one year. You wouldn’t lend it to me for less than $10.50 because you could just lend it to a bank or government via a bond and get back $10.50. I’m riskier than a bank or a government, so you’d want more from me than from a government or bank. Simple.
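A minimal Go sketch of the same arithmetic (the rate and amounts are just the ones from the example above):

package main

import (
   "fmt"
   "math"
)

// futureValue compounds an amount forward n years at rate r;
// presentValue discounts a future amount back to today.
func futureValue(amount, r, years float64) float64 {
   return amount * math.Pow(1+r, years)
}

func presentValue(amount, r, years float64) float64 {
   return amount / math.Pow(1+r, years)
}

func main() {
   fmt.Printf("$10 invested for 1 year at 5%%: $%.2f\n", futureValue(10, 0.05, 1))              // $10.50
   fmt.Printf("$10.50 received in 1 year is worth $%.2f today\n", presentValue(10.50, 0.05, 1)) // $10.00
   // The same math is what makes century-scale ventures so hard to finance:
   fmt.Printf("$1 received 100 years from now is worth $%.4f today\n", presentValue(1, 0.05, 100)) // ~$0.0076
}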

Read the rest of this entry »

Written by alson

July 3rd, 2019 at 2:40 am

Posted in Geekery

On Euthanizing A Companion Animal

without comments

[This is rather off-topic but it’s cathartic and might be helpful to someone.]

We recently euthanized a much beloved family cat. The process was both straightforward and bewildering. Herewith, notes on our experience along with suggestions about how we might approach it differently in the future.

Mechanics

This is about the mechanics of euthanizing a particular animal.  It should be applicable to larger animals in different environments.  Emotional and spiritual aspects are not addressed; those are difficult enough but not understanding the mechanics of the process only compounds the difficulty.

The Animal

He was a gregarious and happy cat, though he was a little “well fed”. In the last month or two, he’d looked rather slimmer, had taken to “hiding” in unused rooms and then to snuggling aggressively, and was not eating or drinking as he normally would. Tests were done, nothing was found and the downward spiral continued over the next few weeks.

We took him to another vet. They looked at his teeth, listened to his heart, squeezed his belly… and said they were going to take him to the back room for a moment. They came back with ultrasound pictures (no charge) of a significant tumor.

At this point, the discussion turned to heroic measures (tumor resection + chemo) and/or palliative measures (he might be comfortable for a few more weeks with prednisone), no doubt to assure the pet owners that euthanasia was not the only option. This discussion was quickly cut off: we appreciate the situation, we know his condition, we know where this ends, further pain is not warranted.

Read the rest of this entry »

Written by alson

June 7th, 2019 at 4:07 pm

Posted in Uncategorized

Organizing Terraform Projects

without comments

At Teckst, we use Terraform for all configuration and management of infrastructure.  The tidal boundary at the intersection of infrastructure and application configuration is largely determined by which kinds of applications will be deployed on which kinds of infrastructure.  Standing up a bunch of customized EC2 instances (e.g. Cassandra)?  Probably something that Ansible or Chef is better suited to, as they’re built to step into a running instance and create and/or update its configuration (though, certainly, this can be done via Terraform and EC2 User Data scripts).

Teckst uses 100% AWS services and 100% Lambda for compute, so we have a much more limited need.  We need Lambda Functions, API Gateways, SQS Queues, S3 Buckets, IAM Users, etc to be created and wired together; thereafter, our Lambda Functions are uploaded by our CI system and run over the configured AWS resources.  In this case, Terraform is perfect for us as it walks our infrastructure right up to the line at which our Lambda Functions take over.

Terraform’s documentation provides little in the way of guidance on structuring larger Terraform projects.  The docs do talk about modules and outputs, but no fleshed-out examples are provided for how you should structure your project.  That said, many guides are available on the web ([1][2][3] are the top three Google results as of this writing).

Terraform Modules

Terraform Modules allow you to create modules of infrastructure which accept/require specific Variables and yield specific Outputs.  Besides being a great way to utilize third-party scripts (e.g. a script you find on Github to build a fully configured EC2 instance with Nginx fronting a Django application), Modules allow a clean, logical separation between environments (e.g. Production and Staging).  A good example of organizing a project using Terraform Modules is given in this blog post.  Initially, we approached organizing scripts similarly:

/prod/main.tf
/prod/vpc.tf - production configs for VPC module 
/staging/main.tf
/staging/vpc.tf - staging configs for VPC module
/modules/vpc/main.tf - generic VPC module definition
/modules/vpc/variables.tf
/modules/vpc/outputs.tf

Now all of our prod configuration values are separate from our staging configuration values.  The prod and staging scripts could reference our generic vpc Module.  Initially, this seemed like a huge win.  Read on to find out why it might not be such a win for in-house-defined infrastructure.
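For illustration, the wiring ends up looking roughly like this (the resource, variable names and CIDRs are invented for the sketch, not our actual configs):

# /modules/vpc/variables.tf
variable "cidr_block" {}

# /modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block = "${var.cidr_block}"
}

# /modules/vpc/outputs.tf
output "vpc_id" {
  value = "${aws_vpc.main.id}"
}

# /prod/vpc.tf - production values wired into the generic module
module "vpc" {
  source     = "../modules/vpc"
  cidr_block = "10.0.0.0/16"
}

# /staging/vpc.tf - staging values wired into the same module
module "vpc" {
  source     = "../modules/vpc"
  cidr_block = "10.1.0.0/16"
}

Each environment keeps its own state and its own values; the module holds the shared shape.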

Read the rest of this entry »

Written by alson

February 14th, 2018 at 11:26 am

Posted in Best Practices

Simple Decoders for Kids

with one comment

My wife created simple symbol-letter decoders for my son.  He thought they were a lot of fun and wanted to share them with friends, so I productized them.  Screenshot here:

[Screenshot of the decoder builder]

Simple, straightforward way to build fun little puzzles for kids.   Play with it here.  Besides changing the phrase, you can add additional confounding codes or remove codes to force kids to guess at the phrase.  Then click the Print button and you’ll have a nice printout with the control panel hidden.
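For the curious, the core of it is just a substitution cipher. A toy Go sketch of the idea (numbers standing in for the symbols the tool actually uses):

package main

import (
   "fmt"
   "strings"
)

// encode replaces each letter with its position in the alphabet (a=1 … z=26);
// everything else passes through untouched.
func encode(phrase string) string {
   var out []string
   for _, r := range strings.ToLower(phrase) {
      if r >= 'a' && r <= 'z' {
         out = append(out, fmt.Sprint(r-'a'+1))
      } else {
         out = append(out, string(r))
      }
   }
   return strings.Join(out, " ")
}

func main() {
   fmt.Println(encode("Have a great day")) // prints the numeric code; the kid works it backwards
}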

I’m building a 2-D version for the codes, too, so that will be along later this week.

Written by alson

February 25th, 2014 at 10:25 pm

Posted in Geekery

WebGL Fractals

without comments

Years ago, I wrote a fractal generator/explorer for OpenGL.  Crazily enough, after nearly 10 years,  it still compiles without complaint on Linux.  But the web is the future [er… or, rather, the present], so…

So I ported the C version to Coffeescript, AngularJS, LESS, Jade and [insert buzzword].  The port was actually very straightforward, with the majority of time spent on building the UI, fiddling with AngularJS, adding fractals, refactoring, etc.  Nothing in the code is too surprising.  One controller handles the UI, two services manage application state and one service renders the fractal.

The app is here.  The code is on GitHub here.  To “compile” the code, you’ll need the NodeJS compilers for Coffeescript, LESS and Jade.  Then run ./scripts/run_compilers.sh.  (Yes, I could have used Grunt or Gulp, but the simple bash script is really simple.)
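If you’re curious what the script amounts to, it’s roughly the following (a hypothetical sketch with made-up paths; the real thing is in the repo):

#!/bin/bash
# Rough sketch of scripts/run_compilers.sh - see the repo for the actual script.
set -e
coffee --compile --output js/ src/    # CoffeeScript -> JavaScript
lessc src/app.less > css/app.css      # LESS -> CSS
jade --out html/ src/                 # Jade -> HTML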

Screenie:

[Screenshot of the WebGL fractal explorer]

Interesting links:

  1. Link
  2. Link
  3. Link
  4. Link
  5. Link
  6. Link

Pull requests, comments, suggestions, etc always welcome.  In particular, are there other fractals that you’d suggest?

Written by alson

December 28th, 2013 at 4:00 pm

Posted in Geekery

Go experiment: de-noising

with 2 comments

CoffeeScript is a great example of how to de-noise a language like Javascript. (Of course, I know people that consider braces to be a good thing, but lots of us consider them noise and prefer significant whitespace, so I’m speaking to those folks.) What would Go code look like with some of CoffeeScript’s denoising?

TL;DR : the answer is that de-noised Go would not look much different than normal Go…

As an experiment, I picked some rules from CoffeeScript and re-wrote the Mandelbrot example from The Computer Language Benchmarks Game. Note: this is someone else’s original Go code, so I can’t vouch for the quality of the Go code…

Here’s the original Go code:

/* targeting a q6600 system, one cpu worker per core */
const pool = 4

const ZERO float64 = 0
const LIMIT = 2.0
const ITER = 50   // Benchmark parameter
const SIZE = 16000

var rows []byte
var bytesPerRow int

// This func is responsible for rendering a row of pixels,
// and when complete writing it out to the file.

func renderRow(w, h, bytes int, workChan chan int,iter int, finishChan chan bool) {

   var Zr, Zi, Tr, Ti, Cr float64
   var x,i int

   for y := range workChan {

      offset := bytesPerRow * y
      Ci := (2*float64(y)/float64(h) - 1.0)

      for x = 0; x < w; x++ {
         Zr, Zi, Tr, Ti = ZERO, ZERO, ZERO, ZERO
         Cr = (2*float64(x)/float64(w) - 1.5)

         for i = 0; i < iter && Tr+Ti <= LIMIT*LIMIT; i++ {
            Zi = 2*Zr*Zi + Ci
            Zr = Tr - Ti + Cr
            Tr = Zr * Zr
            Ti = Zi * Zi
         }

         // Store the value in the array of ints
         if Tr+Ti <= LIMIT*LIMIT {
            rows[offset+x/8] |= (byte(1) << uint(7-(x%8)))
         }
      }
   }
   /* tell master I'm finished */
   finishChan <- true

My quick de-noising rules are:

  • Eliminate var since it can be inferred.
  • Use ‘:’ instead of const (a la Ruby’s symbols).
  • Eliminate func in favor of ‘->’ and variables for functions.
  • Replace braces {} with significant whitespace
  • Replace C-style comments with shell comments “#”
  • Try to leave other spacing alone to not fudge on line count
  • Replace simple loops with an “in” and range form

The de-noised Go code:

# targeting a q6600 system, one cpu worker per core
:pool = 4

:ZERO float64 = 0  # These are constants
:LIMIT = 2.0
:ITER = 50   # Benchmark parameter
:SIZE = 16000

rows []byte
bytesPerRow int

# This func is responsible for rendering a row of pixels,
# and when complete writing it out to the file.

renderRow = (w, h, bytes int, workChan chan int,iter int, finishChan chan bool) ->

   Zr, Zi, Tr, Ti, Cr float64
   x,i int

   for y := range workChan
      offset := bytesPerRow * y
      Ci := (2*float64(y)/float64(h) - 1.0)

      for x in [0..w]
         Zr, Zi, Tr, Ti = ZERO, ZERO, ZERO, ZERO
         Cr = (2*float64(x)/float64(w) - 1.5)

         i = 0
         while i++ < iter && Tr+Ti <= LIMIT*LIMIT
            Zi = 2*Zr*Zi + Ci
            Zr = Tr - Ti + Cr
            Tr = Zr * Zr
            Ti = Zi * Zi

         # Store the value in the array of ints
         if Tr+Ti <= LIMIT*LIMIT
            rows[offset+x/8] |= (byte(1) << uint(7-(x%8)))
   # tell master I'm finished
   finishChan <- true

That seems to be a pretty small win in return for a syntax adjustment that does not produce significantly enhanced readability. Some bits are nice: I prefer the significant whitespace, but the braces just aren’t that obtrusive in Go; I do prefer the shell comment style, but it’s not a deal breaker; the simplified loop is nice, but not incredible; eliding “var” is okay, but harms readability given the need to declare the types of some variables; I do prefer the colon for constants. Whereas Coffeescript can dramatically shorten and de-noise a Javascript file, it looks as though Go is already pretty terse.

Obviously, I didn’t deal with all of Go in this experiment, so I’ll look over more of it soon, but Go appears to be quite terse already given its design…

Written by alson

May 13th, 2013 at 11:34 pm

Posted in Programming

[Synthetic] Performance of the Go frontend for GCC

with 8 comments

First, a note: this is a tiny synthetic bench.  It’s not intended to answer the question: is GCCGo a good compiler?  It is intended to answer the question: as someone investigating Go, should I also investigate GCCGo?

While reading some announcements about the impending release of Go 1.1, I noticed that GCC was implementing a Go frontend.  Interesting.  So the benefits of the Go language coupled with the GCC toolchain?  Sounds good.  The benefits of the Go language combined with GCC’s decades of x86 optimization?  Sounds great.

So I grabbed GCCGo and built it.  Instructions here: http://golang.org/doc/install/gccgo

Important bits:

  • Definitely follow the instructions to build GCC in a separate directory from the source.
  • My configuration was:

/tmp/gccgo/configure --disable-multilib --enable-languages=c,c++,go

I used the Mandelbrot script from The Benchmarks Game at mandelbrot.go.  Compiled using go and gccgo, respectively:

go build mandel.go
gccgo -v -lpthread -B /tmp/gccgo-build/gcc/ -B /tmp/gccgo-build/lto-plugin/ \
  -B /tmp/gccgo-build/x86_64-unknown-linux-gnu/libgo/ \
  -I /tmp/gccgo-build/x86_64-unknown-linux-gnu/libgo/ \
  -m64 -fgo-relative-import-path=_/home/me/apps/go/bin \
  -o ./mandel.gccgo ./mandel.go -O3

Since I didn’t install GCCGo, and after flailing at compiler options to get “go build” to find includes, libraries, etc., I gave up on the simple “go -compiler” syntax for gccgo. So the above gccgo command is the sausage-making version.
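For reference, had gccgo been properly installed where the go tool could find it, the simple form would presumably have been something like this (untested here):

go build -compiler gccgo -gccgoflags="-O3" -o mandel.gccgo mandel.go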

So the two files:

4,532,110 mandel.gccgo  - Compiled in 0.3s
1,877,120 mandel.golang - Compiled in 0.5s

As a HackerNewser noted, stripping the executables could be good. Stripped:

1,605,472 mandel.gccgo
1,308,840 mandel.golang

Note: the stripped GCCGo executables don’t actually work, so take the “stripped” value with a grain of salt for the moment. Bug here.

GCCGo produced an *unstripped* executable 2.5x as large as Go produced. Stripped, the executables were similar, but the GCCGo executable didn’t work. So far the Go compiler is winning.

Performance [on a tiny, synthetic, CPU bound, floating point math dominated program]:

time ./mandel.golang 16000 > /dev/null 

real  0m10.610s
user  0m41.091s
sys  0m0.068s

time ./mandel.gccgo 16000 > /dev/null 

real  0m9.719s
user  0m37.758s
sys  0m0.064s

So GCCGo produces executables that are about 10% faster than does Go, but the executable is nearly 3x the size.  I think I’ll stick with the Go compiler for now, especially since the tooling built into/around Go is very solid.

Additional notes from HN discussion:

  • GCC was 4.8.0.  Go was 1.1rc1.  Both AMD64.

Written by alson

May 5th, 2013 at 2:35 pm

Posted in Programming