Friday, March 24, 2023

A start job is running for Wait for Network to be Configured

Recently installed Ubuntu Server 22.04 on an ODroid H2 (which has two 2.5GbE ports), and every time the server reboots there's a two-minute pause with the message "A start job is running for Wait for Network to be Configured" while a timer counts up.

Luckily, it's an easy fix.

Apparently the installer adds both interfaces to the netplan config, so the system happily waits the full two minutes for DHCP to assign each one an address, even if there's no ethernet cable attached.

To fix, edit /etc/netplan/00-installer-config.yaml and remove the interface you're not using.

For example, the file contents will look like the following:

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp2s0:
      dhcp4: true
    enp3s0:
      dhcp4: true
  version: 2
Since there is only a network cable connected to enp2s0, we can simply remove the enp3s0 lines. (Note: merely changing the dhcp4 value to false does NOT resolve the issue; removing the lines does.)
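After the edit, the file should look like this (alternatively, I believe marking the unused interface with optional: true also tells systemd not to wait for it, though removing the lines is what I tested):

```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp2s0:
      dhcp4: true
  version: 2
```

Apply with sudo netplan apply (or just reboot).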

Saturday, March 11, 2023

Broken signatures

What happens if you leave an Ubuntu machine running for a couple years and then try to update? Sometimes signatures expire...
Err:6 http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.19/xUbuntu_20.04  InRelease
  The following signatures were invalid: EXPKEYSIG 4D64390375060AA4 devel:kubic OBS Project <devel:kubic@build.opensuse.org>
To resolve: (from https://github.com/containers/podman.io/issues/296#issuecomment-1455207534)
wget -qO - https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_22.04/Release.key | sudo apt-key add -
Err:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
To resolve: (from https://github.com/kubernetes/release/issues/1982#issuecomment-1415573798)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor | sudo dd status=none of=/usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update

Monday, July 11, 2022

Java's Project Panama and Rust - Simple example

There are a couple of good guides to getting started with Foreign Function Interfaces (FFI) in Java 19, but since it's a preview feature, everything is subject to change, and the ones I found were already out of date.

Here are my notes for getting a trivial Java program to call a Rust library:
  •  Install Java 19 Early Access (recommended: sdk install java 19.ea.29-open)
  •  Install jextract (it's not part of OpenJDK; instead look here: https://github.com/openjdk/jextract)
  •  Create the Rust library (cargo init --lib)
Cargo.toml
[package]
name = "myrustlibrary"
version = "0.1.0"
edition = "2021"

[dependencies]

[lib]
crate-type = ["cdylib"]

[build-dependencies]
cbindgen = "0.20.0"

Add a build.rs file; we'll use cbindgen to generate the lib.h header that we'll feed to jextract.
extern crate cbindgen;

use std::env;

fn main() {
    let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap();

    cbindgen::Builder::new()
        .with_crate(crate_dir)
        .with_language(cbindgen::Language::C)
        .generate()
        .expect("Unable to generate bindings")
        .write_to_file("lib.h");
}

Add the src/lib.rs contents; for simplicity we'll just echo the PID:

use std::process;

#[no_mangle]
pub extern "C" fn rust_get_pid() -> u32 {
    return process::id();
}

Now build it: cargo build

Important: keep track of where Cargo built your lib. lib.h will be in the base folder, and the lib itself will be in the target/debug folder (libmyrustlibrary.dylib if you're on a Mac; the libmyrustlibrary.d file next to it is just dependency info).
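For reference, the generated lib.h should look roughly like this (the exact preamble cbindgen emits can vary by version; the important part is the rust_get_pid declaration):

```c
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Generated by cbindgen from the #[no_mangle] extern "C" fn in lib.rs */
uint32_t rust_get_pid(void);
```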

Run jextract on the lib.h file

  ./jextract  -t org.rust -l myrustlibrary --output classes ./lib.h
  

Now there will be a bunch of class files in the classes/org/rust directory.

Write a Java program that uses the bindings generated from the Rust header (lib.h):

import static org.rust.lib_h.*;	// notice this is the target package we specified when running jextract

public class Main {
  public static void main(String[] args){
    System.out.println("🦀 process id = " + rust_get_pid());
  }
}

And finally, tie it all together

  ./java --enable-preview --source 19 --enable-native-access=ALL-UNNAMED  -Djava.library.path=./target/debug -cp classes Main.java

🦀 process id = 5526
  

Saturday, April 24, 2021

Rough guide to upgrading k8s cluster w/ kubeadm

This is not the best way, just a way that works for me given the cluster topology I have (which was installed using kubeadm on Ubuntu, and includes a non-HA etcd running in-cluster).

On the control plane / master node:

1) Backup etcd (manually). You might need the info from the etcd pod (`kubectl -n kube-system describe po etcd-kmaster`) to find the various certs/keys/etc, but really they're probably just at /etc/kubernetes/pki/etcd/
kubectl exec -n kube-system etcd-kmaster -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --key=/etc/kubernetes/pki/etcd/server.key --cert=/etc/kubernetes/pki/etcd/server.crt snapshot save /var/lib/etcd/snapshot.db
2) Backup important files locally (but really, these should also be backed up on a different server)
mkdir $HOME/backup
sudo cp -r /etc/kubernetes/pki/etcd $HOME/backup/
sudo cp /var/lib/etcd/snapshot.db $HOME/backup/$(date +%Y-%m-%d--%H-%M)-snapshot.db
sudo cp $HOME/kubeadm-init.yaml $HOME/backup
Figure out what we're going to upgrade to. Do NOT attempt to skip minor versions (i.e. go 1.19 -> 1.20 -> 1.21, not 1.19 -> 1.21).
sudo apt update
sudo apt-cache madison kubeadm
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:26:21Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

I'm going to go from 1.19.6-00 to 1.20.6-00 because that's what's currently available (and then from 1.20.6-00 to 1.21.0-00)

Remove the hold on kubeadm, update it, then freeze it again.

sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.20.6-00
sudo apt-mark hold kubeadm
Make sure it worked
sudo kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:26:21Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Cordon and drain the master node (I've got a pod using local storage, so the extra --delete-local-data flag is necessary):
kubectl cordon kmaster
kubectl drain kmaster --ignore-daemonsets --delete-local-data
Check out the upgrade plan. I get two options: upgrade to the latest in the v1.19 series (1.19.10), or upgrade to the latest stable version (1.20.6).
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.20.6
Nothing else needed to be upgraded, so I saw
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.6". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The node versions will still show v1.19.6, which is expected:
kubectl get no
NAME        STATUS                     ROLES                  AGE    VERSION
kmaster     Ready,SchedulingDisabled   control-plane,master   128d   v1.19.6
kworker01   Ready                      <none>                 125d   v1.19.6
Now to upgrade kubelet and kubectl to the SAME version as kubeadm
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y  kubelet=1.20.6-00 kubectl=1.20.6-00
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
Now we should see the master node running the updated version
kubectl get no
NAME        STATUS                     ROLES                  AGE    VERSION
kmaster     Ready,SchedulingDisabled   control-plane,master   128d   v1.20.6
kworker01   Ready                      <none>                 125d   v1.19.6
Uncordon it, and make sure it shows 'Ready'. Now drain the worker(s) and repeat roughly the same process on the worker nodes (and yes, the --force is necessary because I'm running something that isn't set up correctly or playing nicely - I'm looking at you, operatorhub):
kubectl drain kworker01 --ignore-daemonsets --delete-local-data --force
On the worker node(s)
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.20.6-00
sudo apt-mark hold kubeadm

sudo kubeadm upgrade node

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y  kubelet=1.20.6-00 kubectl=1.20.6-00
sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
Back on the master node, we should be able to get the nodes and see that the worker is upgraded. Since it is, we can uncordon it, and it should switch to 'Ready'
kubectl get no
NAME        STATUS                     ROLES                  AGE    VERSION
kmaster     Ready                      control-plane,master   128d   v1.20.6
kworker01   Ready                      <none>                 125d   v1.20.6
That's it! Rinse and repeat for 1.21 once the entire cluster is on 1.20.
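The unhold/install/hold dance above is identical for every component and every version hop, so it can be wrapped in a small helper. A sketch (pin_upgrade is a made-up name; it just bundles the exact commands used above):

```shell
# Sketch: upgrade a held apt package to an exact version, then re-hold it.
pin_upgrade() {
  local pkg="$1" ver="$2"
  sudo apt-mark unhold "$pkg"
  sudo apt-get install -y "$pkg=$ver"
  sudo apt-mark hold "$pkg"
}

# usage:
#   pin_upgrade kubeadm 1.20.6-00
#   pin_upgrade kubelet 1.20.6-00
```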

Thursday, April 1, 2021

Mysql connection error

This was a mildly interesting one. I run some applications on my laptop that talk to a k8s cluster in my office, including a mysql instance. The main application started failing with the common "The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server" error. The app had been running earlier today. Debugging it, the first step is always the logs.
  kubectl logs mysql-57f577f4b9-gvtlz
Lo and behold, a bunch of suspicious errors:
2021-03-10T02:49:19.349769Z 149591 [ERROR] Disk is full writing './mysql-bin.000015' (Errcode: 15781392 - No space left on device). Waiting for someone to free space...
2021-03-10T02:49:19.349823Z 149591 [ERROR] Retry in 60 secs. Message reprinted in 600 secs
2021-03-10T02:58:46.658696Z 151120 [ERROR] Disk is full writing './mysql-bin.~rec~' (Errcode: 15781392 - No space left on device). Waiting for someone to free space...
2021-03-10T02:58:46.658728Z 151120 [ERROR] Retry in 60 secs. Message reprinted in 600 secs
2021-03-10T02:59:19.352777Z 149591 [ERROR] Disk is full writing './mysql-bin.000015' (Errcode: 15781392 - No space left on device). Waiting for someone to free space...
2021-03-10T02:59:19.354093Z 149591 [ERROR] Retry in 60 secs. Message reprinted in 600 secs
2021-03-10T03:04:46.886946Z 151120 [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 61, event_type: 34
Looks like the bin logs have finally filled up the volume. Unfortunately, I created that pod with a rather small PVC, and since I'm using OpenEBS, it won't easily resize. What to do? Log into the instance and clean out the logs...
  kubectl exec -it mysql-57f577f4b9-gvtlz -- /bin/sh
  rm /var/lib/mysql/mysql-bin*
Problem solved! (well, temporarily, until they fill up again)
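A gentler alternative (a sketch - purge_binlogs is a made-up helper name, and it assumes the server is still responsive and the same pod name) is to let MySQL delete its own binlogs with PURGE BINARY LOGS rather than rm'ing the files out from under it:

```shell
# Sketch: ask MySQL to purge its binlogs itself instead of deleting the
# files behind its back. Pass the root password as the first argument.
purge_binlogs() {
  kubectl exec mysql-57f577f4b9-gvtlz -- \
    mysql -uroot -p"$1" -e "PURGE BINARY LOGS BEFORE NOW();"
}

# usage: purge_binlogs "$MYSQL_ROOT_PASSWORD"
```

Capping retention (e.g. SET GLOBAL expire_logs_days = 3; on MySQL 5.7) should stop the logs filling the volume again.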

Sunday, March 28, 2021

Rust app using build container and distroless

Turns out that, just like with Golang, it's really quite simple to craft a small container image for a Rust app. Taking a trivial "hello world" app using Actix, we can use a multi-stage build, with one of the Google distroless container images as the base, to produce a tiny final image.

Dockerfile:
FROM rust:1.51 as builder
LABEL maintainer="yourname@whatever.com"

WORKDIR /app
COPY . /app
RUN cargo build --release

FROM gcr.io/distroless/cc-debian10
COPY --from=builder /app/target/release/hello-world /
EXPOSE 1111
CMD ["./hello-world"]
Don't forget to include a .dockerignore file at the same level as your Dockerfile (even if you're using podman/buildah - they will respect the .dockerignore). At a minimum, there's no need to include the git directories in the build context:

.dockerignore:
.git
target/
Finally, build your image:
docker build -t hello-world .
Although the build container (rust:1.51) is rather large at 1.27GB, and the intermediate images somehow balloon to 2.5GB, the final image is only ~30MB.

Wednesday, December 9, 2020

Hybrid Kubernetes cluster (arm/x86)

This will be a long post (or maybe multiple posts). The end result will be a 7-node Kubernetes cluster capable of running both x86 and arm64 workloads.

Hardware:
  •  2 Odroid H2+ nodes (one master and one worker)
  •  5 Raspberry Pi 4 nodes (all workers)
  •  network switch
  •  misc (power adapters / cat6e cables / etc)

Software stack:
  •  Ubuntu 20.04
  •  Kubernetes 1.19
  •  CRI-O

First up, the master node. These Odroid H2+ SBCs are pretty awesome (TODO link to specs). They include two Realtek 2.5GbE ethernet ports, but one minor drawback is that you need to install the drivers before they work, making the install a little trickier. Odroid has a good wiki page dedicated to this issue (https://wiki.odroid.com/odroid-h2/application_note/install_ethernet_driver_on_h2plus), but the absolute easiest thing to do is to share your phone's internet connection via USB (it will be picked up automatically).

Since I am going to install two nodes (at least), I figured I'd do something a little better. Enter Cubic (Custom Ubuntu ISO Creator - https://launchpad.net/cubic), a GUI wizard to create a customized Ubuntu live ISO image. There are lots of tutorials on the internet explaining how to use Cubic, so I won't go into details, but basically we need to download the Ubuntu 20.04 ISO and open it up in Cubic. One of the steps in the Cubic wizard will drop you into a shell, at which point you'll want to follow the instructions on the Odroid wiki page above to add the hardkernel PPA and install the drivers (realtek-r8125-dkms) into the image. At this point, you might as well add CRI-O (following the instructions on their site):
export OS=xUbuntu_20.04
export VERSION=1.19
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

apt-get update
apt-get install cri-o cri-o-runc
Finish up the wizard, and create the new Ubuntu live image. Burn that image (balenaEtcher works well for this) onto a USB thumbdrive, and pop it into the first H2+ node. Install Ubuntu, picking whatever options you want... (I'm trying out ZFS, which is still experimental in 20.04). Since we already installed the drivers, you should have the internet available, as long as you have connected it to something with a DHCP server. Update to the latest software, etc.

# Configuring and Running CRI-O

First, kubernetes will require a few things for cri-o to work. As root, run the following:
modprobe overlay
modprobe br_netfilter

cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system

Important! Also add overlay and br_netfilter to /etc/modules-load.d/modules.conf so that it's permanent.  I did not originally, rebooted, and then wondered why I was getting "/proc/sys/net/bridge/bridge-nf-call-iptables not found" errors when trying to run kubeadm!
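In other words, /etc/modules-load.d/modules.conf should end up containing the two module names, one per line:

```
# load at boot so the sysctl settings above have their modules available
overlay
br_netfilter
```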

Next up, install the CRI tool `crictl`

export VERSION="v1.19.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
If you try it now, you'll see that CRI-O isn't running...
crictl info
FATA[0002] connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded 
So fix that:
sudo systemctl start crio
sudo systemctl enable crio
CRI-O will be running at this point, but it needs a CNI (container networking interface)
sudo crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": false,
        "reason": "NetworkPluginNotReady",
        "message": "Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
      }
    ]
  }
}
We're going to get that when we install kubernetes, so let's skip ahead... But first, let's install the OpenSSH server so that we can complete the rest of these tasks remotely:
sudo apt install openssh-server
# Installing kubernetes

Note: at the time of writing this post, kubernetes 1.20 had just been released. I will upgrade to that at some point in the future, but for now I want 1.19 so that everything matches up (e.g. cri-o). You can run `apt list -a kubeadm` to see what's available; 1.19.4-00 was the latest in the 1.19 branch right now.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl


Make sure swap is turned off

 sudo swapoff -a 
Here is where we diverge based on whether we're creating the master node or a worker...

IF MASTER...

Take a look at the kubeadm defaults: `kubeadm config print init-defaults`
W1209 08:04:47.908176   66064 kubelet.go:200] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': executable file not found in $PATH
W1209 08:04:47.914979   66064 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kmaster
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
We need to do some configuration before running `kubeadm`; see https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file

Create a file /etc/default/kubelet and paste in:

KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m

Figure out what CIDR your CNI is using: cat /etc/cni/net.d/100-crio-bridge.conf (it's probably 10.85.0.0/16). Note: you cannot pass both --config and --pod-network-cidr as suggested by some other tutorials. You must use a ClusterConfiguration in the --config file (as below).

Pass in the cgroup driver through an init file you will use with kubeadm (I used kubeadm-init.yaml):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "10.85.0.0/16"

sudo kubeadm init --config=kubeadm-init.yaml

IF WORKER...

Don't forget to restart the kubelet.
systemctl daemon-reload && systemctl restart kubelet
We need to join to the existing cluster. Go to the master node and run:
kubeadm token create --print-join-command
Then take the output of that command and run it on the worker node.

DNS!! Another thing that got me... I couldn't get external hostnames to resolve on the ubuntu hosts. It turns out there were no DNS servers listed. Edit /etc/systemd/resolved.conf on all nodes, uncomment the FallbackDNS line, and set it to your favorite DNS resolver (e.g. 1.1.1.1 or 8.8.8.8). Restart the service:
service systemd-resolved restart
Great, now the external names resolve on the nodes. But the pods themselves cannot resolve external names! You may need to restart CoreDNS to pick up the changes:
kubectl rollout restart -n kube-system deployment coredns

Sunday, December 22, 2019

My complete steps to join a new Raspberry Pi 4 node running Ubuntu 19.10 to the cluster

Connect the micro-hdmi, keyboard, and power.

Wait for it to boot up.

Login. Default user/pass is ubuntu/ubuntu, but it will make you change it on first login.

sudo apt update
sudo apt upgrade

Change the hostname, edit /etc/hostname

Enable cgroups in the boot cmdline: edit /boot/firmware/nobtcmd.txt and add cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 to the end of the existing line (do not add newlines).

Reboot. Log in again.

Run ssh-keygen

From the master node:

Copy over the ssh key from the master to the new node, just to make things easier:

ssh-copy-id nodehostname

Copy over the node token from the master so that the new node can join the cluster:

scp /var/lib/rancher/k3s/server/node-token ubuntu@nodehostname:.

Now, back on the new node...

export K3S_TOKEN=$(cat node-token)
export K3S_URL=https://masterhostname:6443
curl -sfL https://get.k3s.io | sh -

Watch the logs, but the final message should be something like

systemd: Starting k3s-agent

Back on the master, run

sudo kubectl get nodes
And make sure the new one joined. That's it!

Saturday, December 21, 2019

K3s on Raspberry Pi 4 4GB / Ubuntu 19.10 / Arm64

Picking this back up again, now that the USB bug with >3GB of memory has been fixed in the latest version of Ubuntu 19.10.

Installing K3s is pretty easy, but there is one gotcha that isn't terribly well documented anywhere that I could find.

You need to enable cgroups for cpu and memory. To do that on Ubuntu 19.10 on a Raspberry Pi, edit /boot/firmware/nobtcmd.txt and add the following to the end of the existing line:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

Reboot. Then install K3s

Friday, March 3, 2017

JSON pretty from inside vim

A one-liner to format JSON into a readable (pretty) form from inside Vim.

:%!python -m json.tool
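The same formatter works outside Vim too (on modern systems the module lives under python3, so inside Vim the command becomes :%!python3 -m json.tool):

```shell
echo '{"name":"vim","tags":["json","pretty"]}' | python3 -m json.tool
# {
#     "name": "vim",
#     "tags": [
#         "json",
#         "pretty"
#     ]
# }
```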

Saturday, January 30, 2016

Helping Apple with a missing button

Hi Apple,
It's ok, I get it... You're not trying to force Safari down our throats with the popup every couple days... it's just that you didn't have room to add "Don't ask me again" without ruining the aesthetic design.

I don't blame you, I mean look how nicely "Later" and "Try Now" fit together. I wouldn't want to add more options either.

Since Apple couldn't fit it in, here's how to fake the action of clicking on the missing "never ask me about switching to this beautifully designed browser" button. You'll have to open up terminal. Then type these three commands:

defaults write com.apple.coreservices.uiagent CSUIHasSafariBeenLaunched -bool YES
defaults write com.apple.coreservices.uiagent CSUIRecommendSafariNextNotificationDate -date 2020-01-01T00:00:00Z
defaults write com.apple.coreservices.uiagent CSUILastOSVersionWhereSafariRecommendationWasMade -float 10.99

That *should* take care of seeing the pop-up for the next few years...

Friday, November 13, 2015

updatedb on OSX

Mainly because I always forget: on OSX you can have the script that updates the locate db run automatically, just run:
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist
(you only have to do it once)

Before you do, edit /System/Library/LaunchDaemons/com.apple.locate.plist and get rid of the following portion so that it runs daily instead of just once a week:

                <key>Weekday</key>
                <integer>6</integer>

Monday, November 2, 2015

A refresher on what Java 7 brought...

Java 7 was released way back in 2011, but it's unfortunately not THAT uncommon for a large organization to get set in its ways (having found, years ago, that battle-tested, magical set of exact JVM tuning arguments that works for *their particular* application stack on a certain version of a 1.6 JVM), so that you don't get exposed to "newer" features - and only get dragged kicking and screaming into JDK 8 when Oracle deprecates certain versions.

So, what did Java 7 bring us? As developers, it seemed to be mostly about cleaning up syntax/getting rid of some of the boilerplate...

  • Strings in switch statements - sure, other languages had it for years, but you weren't able to switch on Strings until Java 7. Nice to have, not especially mind-blowing.
  • try with resources - This was a welcome addition - no longer did you have the super-awkward "declare your variable outside the block just so you have access to it in a second try/catch so you can close it..." e.g.
    Connection conn = null;
    try{
      conn = getConnectionFromSomewhere();
      // do work with the resource
    } catch (Exception e){
      // handle
    } finally {
      if (conn != null){
        try {
          conn.close();
        } catch (Exception someOtherExceptionYouProbablyIgnore) {
          // yah, you probably ignore this
        }
      }
    }
    
    Nope, now you can just do
    try(Connection conn = getConnectionFromSomewhere();){
      // do work with the resource
    } catch (Exception e) {
      // handle
    }
    You can also have your own objects work this way; look into java.lang.AutoCloseable
  • Multi-catch - Another nice little cleanup, rather than listing out all the exceptions you want to handle (even if you handle some of them the same), you can now use the | pipe operator to union the exceptions you are handling. Instead of this:
    } catch (IOException ex) {
         logger.log(ex);
         throw ex;
    } catch (SQLException ex) {
         logger.log(ex);
         throw ex;
    }
    write this:
    } catch (IOException|SQLException ex) {
        logger.log(ex);
        throw ex;
    }
    
  • Binary literals - "0b" prefix, e.g. 0b10100111001 is 1337
  • Improved left-to-right type inference - the diamond operator (<>) removed the necessity of writing tripe like:
    Map<String, List<Integer>> m = new HashMap<String, List<Integer>>();
    simplifying it slightly to:
    Map<String, List<Integer>> m = new HashMap<>();
    (This seems slightly backwards to me, we're essentially specifying the details on the interface and waving our hands over the concrete implementation, instead of
    Map<> m = new HashMap<String, List<Integer>>(); // does not work
    but I'm not the designer - you must use the first version =)
  • Underscores in numeric literals - what's easier to understand at a glance: counting the zeros in 1000000000, or 1_000_000_000?
  • A whole slew of new APIs for NIO - notably, java.nio.Path and the WatchService, which allows you to be notified of changes to a path you're watching (there are a ton of applications for this)
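Most of the smaller features above fit into one tiny program (a sketch; the class and method names are mine, not from any particular codebase):

```java
public class Java7Features {
    // Any class implementing AutoCloseable works with try-with-resources.
    static class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Strings in switch statements
    static String env(String name) {
        switch (name) {
            case "dev":  return "local";
            case "prod": return "live";
            default:     return "unknown";
        }
    }

    public static void main(String[] args) {
        int leet = 0b10100111001;       // binary literal (1337)
        int billion = 1_000_000_000;    // underscores in numeric literals
        Resource r = new Resource();
        try (Resource res = r) {
            // work with the resource; close() runs automatically on exit
        }
        System.out.println(leet + " " + billion + " " + env("prod") + " " + r.closed);
        // prints: 1337 1000000000 live true
    }
}
```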

Those are the big ones in my opinion. Did I miss any of your favorites?

Saturday, October 31, 2015

Changing default prefix keys for tmux

tmux uses Ctrl-b as the default command prefix; that's ok on Windows but less than desirable on a MacBook Pro. Turns out it's super easy to remap to something easier to type, but it's a two-step process.

First, let's get some usage out of the vestigial 'caps lock' key. Go to 'System Preferences' -> 'Keyboard' -> 'Modifier Keys' and then change caps lock to do something (marginally more) useful, the Control key.

Now that that's out of the way, remap the prefix in tmux by editing (or creating) your ~/.tmux.conf file. We only need one line in there to remap the prefix to Ctrl-a (which are handily right next to each other):

set -g prefix C-a

Two things to note: first, changing the caps lock key to be control affects ALL applications on your Mac, if you're like me and never use caps lock, that's probably ok - just something to remember. Secondly, sometimes you might want to be able to send Ctrl-a to an application that you're using inside tmux. If you think you will, just add another line to your ~/.tmux.conf file:

bind C-a send-prefix

This will allow you to send Ctrl-a to the application running inside tmux by pressing it TWICE.
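Putting both lines together (plus an unbind of the old Ctrl-b prefix, which is optional but common), the whole ~/.tmux.conf is just:

```
set -g prefix C-a
unbind C-b
bind C-a send-prefix
```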

Friday, September 4, 2015

Compiling a Go app, that uses Cgo, to run on an Edison

[TLDR - Use a Docker container, install the 32-bit compiler chain and 32-bit dev libraries on the build machine, include the flags to enable CGO, set the target GOARCH, set ldflags to statically link]

So, originally I thought I'd be able to easily get my (trivial) Go app to cross-compile for the Edison (which is running a 32-bit linux variant) but I was quickly disabused of that notion. My app uses the `gopacket` library, which in turn uses C bindings (Cgo) to `libpcap-dev` to do the actual packet capture.

I had originally thought it was just a matter of adding "GOOS=linux GOARCH=386" to compile from my OSX box to target a 32-bit linux binary. It works fine for most apps, BUT it doesn't work for apps that are using Cgo. Ah, just forgot the "CGO_ENABLED=1" flag, right? Nope. That causes all sorts of different errors. Googling/Stack Overflow didn't really turn up anything helpful, but there's *probably* a way to do it. (there were a few projects out there, including gonative that seemed promising, but `gonative` only addresses Cgo-enabled versions of the stdlib packages, not if your project uses Cgo)

Rather than dig into the intricacies of cross-compiling on a Mac, I just pivoted to using a Docker container. I couldn't find an official Yocto linux container prebuilt, which is what the Edison runs, so I just went with a standard Ubuntu image.

root@ubuntu-docker-instance# go build listen.go 

Sweet, compiled. All set, that was easy. Let's just check the binary to make sure all is good with the world.

root@ubuntu-docker-instance# file listen
listen: ELF 64-bit LSB  executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=11b30b279fd6580392e72bac70a5be034e12b2a7, not stripped

Oops, dynamically linked. Let's fix that by setting the correct ld flags, and check it again.

root@ubuntu-docker-instance# go build --ldflags '-extldflags "-static"' listen.go 
root@ubuntu-docker-instance# file listen
listen: ELF 64-bit LSB  executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.24, BuildID[sha1]=8a5a63b0151c2cef02399d091ea7339b2ca1d30b, not stripped

Ok, statically linked now but it's (still) 64-bit. The Edison is 32-bit, which means we'll have to fix that... should be cake, just use the "GOARCH" flag...

root@ubuntu-docker-instance/util# GOARCH=386 go build --ldflags '-extldflags "-static"' identify.go 
# command-line-arguments
./identify.go:19: undefined: pcap.FindAllDevs
./identify.go:23: undefined: pcap.OpenLive
./identify.go:23: undefined: pcap.BlockForever

What?! Oh, yes, need to tell the compiler about the Cgo code... (CGO_ENABLED=1)

root@ubuntu-docker-instance/util# GOARCH=386 CGO_ENABLED=1 go build --ldflags '-extldflags "-static"' identify.go 
# runtime/cgo
In file included from /usr/include/errno.h:28:0,
                 from /usr/local/go/src/runtime/cgo/cgo.go:50:
/usr/include/features.h:374:25: fatal error: sys/cdefs.h: No such file or directory
 #  include <sys/cdefs.h>
                         ^
compilation terminated.

C'mon... now what? Some quick googling turns up the fact that you need the 32-bit gcc stuff, since the host architecture is 64-bit.

root@ubuntu-docker-instance/util# apt-get install libx32gcc-4.8-dev libc6-dev-i386

root@ubuntu-docker-instance/util# GOARCH=386 CGO_ENABLED=1 go build --ldflags '-extldflags "-static"' identify.go 
# github.com/google/gopacket/pcap
/usr/bin/ld: cannot find -lpcap
collect2: error: ld returned 1 exit status

Now what? Oh, although I have the 32-bit compiler chain, I DON'T have the 32-bit version of libpcap-dev. Let's fix that.

root@ubuntu-docker-instance/util# apt-get install libpcap-dev:i386
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package libpcap-dev:i386 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'libpcap-dev:i386' has no installation candidate

Huh? Furious googling ensues... Ok, add the i386 architecture to dpkg and try again...

root@ubuntu-docker-instance/util# sudo dpkg --add-architecture i386
root@ubuntu-docker-instance/util# sudo apt-get update
root@ubuntu-docker-instance/util# apt-get install libpcap0.8-dev:i386

NOW we're cooking with fire. Let's give it one more try...

root@ubuntu-docker-instance/util# GOARCH=386 CGO_ENABLED=1 go build --ldflags '-extldflags "-static"' identify.go 

No obvious errors - and running file over the result confirms it's a 32-bit, statically linked binary at this point:

identify: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.24, not stripped

SCP the binary onto the Edison and test it...

root@edison:~# ./identify -inf=wlan0
2015/09/04 14:11:02 Starting up...
2015/09/04 14:11:03 Listening for ARP packets on en0 to identify dash button.
Press the dash button and watch the screen. You will want the 'Source MAC'
2015/09/04 14:11:09 ARP packet, Source MAC[74:75:48:a4:59:a8], Destination MAC[ff:ff:ff:ff:ff:ff]
2015/09/04 14:11:24 ARP packet, Source MAC[74:75:48:29:a8:7c], Destination MAC[ff:ff:ff:ff:ff:ff]

SUCCESS!!!

Tuesday, September 1, 2015

Playing with Amazon Dash buttons

Amazon is experimenting with wireless buttons (at $5 each) called Dash Buttons. The intended usage seems to be for Prime members to stick a button near a product that needs periodic refilling, e.g. laundry soap, so that when you're almost out, you press the button and it (almost) magically reorders the product for you.

These buttons are nifty little pieces of technology, especially for $5. Someone has already figured out how to use them for something other than their intended purpose - read his post on Medium, it's very well written. He has Python source code for accomplishing the hack.

I wanted to take a slightly different route to the same type of thing. I have a bunch of hardware lying around that would be fun to play with, including an Intel Edison board. I could write my code in Go, cross-compile it on my Mac, and just SCP it onto the Edison. Sounded like a fun evening or two, so that's what I did and threw it up on GitHub.

I tested the code on OS X and it works. But attempting to compile it for Linux, so I can deploy it onto the Edison, fails...

$ GOOS=linux GOARCH=386 go build identify.go
# command-line-arguments
./identify.go:19: undefined: pcap.FindAllDevs
./identify.go:23: undefined: pcap.OpenLive
./identify.go:23: undefined: pcap.BlockForever

I'm pretty sure the pcap bindings use cgo, and there are some details about cross-compiling cgo code that I'll need to figure out... but that will be for another post.

Wednesday, July 22, 2015

Getting RStudio to work with RWeka and the previous java 1.8 problem

So, from the previous post, installing Java 1.6 and recompiling rJava is enough to get everything working from a terminal window. But if you start up RStudio and try to use RWeka (or, probably, other libraries that use JNI) you'll see an error message similar to:

Error : .onLoad failed in loadNamespace() for 'rJava', details:
  call: dyn.load(file, DLLpath = DLLpath, ...)
  error: unable to load shared object '/usr/local/Cellar/r/3.2.0_1/R.framework/Versions/3.2/Resources/library/rJava/libs/rJava.so':
  dlopen(/usr/local/Cellar/r/3.2.0_1/R.framework/Versions/3.2/Resources/library/rJava/libs/rJava.so, 6): Library not loaded: @rpath/libjvm.dylib
  Referenced from: /usr/local/Cellar/r/3.2.0_1/R.framework/Versions/3.2/Resources/library/rJava/libs/rJava.so
  Reason: image not found
Error: package or namespace load failed for ‘RWeka’

A workaround is to start RStudio from a terminal window by running the following line:

LD_LIBRARY_PATH=$(/usr/libexec/java_home)/jre/lib/server: open -a RStudio

Monday, July 20, 2015

Getting various R packages to work with Java 1.8 and rJava on OSX

If you attempt to install various R packages (e.g. RWeka) on OSX using a Java 1.8 runtime, you're likely to run into the following error message (even though you have a perfectly good installation of Java):
"No Java runtime present, requesting install.
ERROR: loading failed"

There are a ton of threads out there about this problem, but it seems to boil down to this unclosed (and untouched since 2014-01) JDK issue -> https://bugs.openjdk.java.net/browse/JDK-7131356

The "solution" can be found on the last comment on this rJava github issue: https://github.com/s-u/rJava/issues/37

  1. install JDK 1.6 from Apple (http://support.apple.com/kb/DL1572)
  2. open a new terminal window and make sure you're still using 1.8 - "java -version"
  3. run "sudo R CMD javareconf -n" from a terminal window
  4. start up R and install rJava again from source - install.packages("rJava", type="source")

Tuesday, May 12, 2015

Continuous integration: Rmd to md to html

I'm taking one of the Coursera courses right now on Reproducible Research and the first assignment requires you to build an Rmd file and convert it to html (using knitr). They do a really good job showing how to use RStudio to accomplish this task, but what if you don't want to bother with RStudio?

Here's (https://gist.github.com/slowteetoe/7b8426f567ac3df1d8f9) a simple bash script that watches an Rmd file. When you save the file, it runs knitr to convert it to markdown, then converts the markdown to html and opens it in a browser... sort of continuous integration for R Markdown.

A couple of caveats: you'll need to install the Ruby gem 'kicker' and the R packages 'knitr' and 'markdown', and put the script somewhere in your path. It will also have to be made executable ('chmod 755 knitr'). Also, it opens a new browser tab each time - I couldn't figure out how to change this behavior, but it was worth it (to me at least).

Thursday, April 16, 2015

R annoyances

R seems to be, hmm, quirky, from what I've seen of it so far... For instance, ddply behaves differently with the American vs. British spellings of "summarize". Seriously?!

> g <- ddply(m, c("QGroup","Income.Group"), summarize, All = length(CountryCode))
Error: argument "by" is missing, with no default

> g <- ddply(m, c("QGroup","Income.Group"), summarise, All = length(CountryCode))

Yes, that's right - "summarise" works fine while "summarize" apparently takes different arguments. (Most likely another attached package is masking plyr's summarize alias - Hmisc, for one, defines its own summarize(X, by, FUN), which would explain the complaint about a missing "by" argument.)