Hugh Herr showed the state of the art in bionics at TED, including a touching moment: a dancer's first performance on stage after she got her mobility back.
I think the technology shown here has great potential to speed up motor learning: if you have ever done a sport that requires good coordination, you know that practice makes perfect. That's because you have to build the neural pathways in the cerebellum and basal ganglia to learn the new movements; you can't learn them just by observation, because that stimulates a different part of the brain, the temporal lobes.
It could have vast applications, from faster physical rehabilitation to recreational uses in sports and dancing.
We live in interesting times!
Somewhat related: Switzerland will host the first olympics for bionically augmented people in 2016.
I am so happy that Google is unveiling more and more pieces of our infrastructure.
I am so tired of blogging about trivial things: my work mostly lived in a world of its own, building things that you would never see… until now:
Every time you see a graph in Google Cloud Platform, you’re using software and infrastructure that my team runs.
It’s time to bring more and more of our amazing tools out of the chocolate factory.
Ganeti is cluster management software that provides basic network services, virtual workstations and general Linux servers to Google employees for corporate use. It is open source and pretty solid. The first release was in 2007!
In a few points:
- It provides disk management, operating system installation, and management of virtual machines.
- Supports live migration.
- It scales from 1 up to 150 physical machines per cluster.
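To give a feel for how those pieces fit together, here is a hedged sketch of day-to-day use (the node and instance names are made up, and flags vary a bit between Ganeti versions, so check `gnt-instance --help` on yours):

```shell
# Initialize a cluster (run once, on the first node).
gnt-cluster init cluster1.example.com

# Create a VM with DRBD-replicated disks, so live migration is possible.
gnt-instance add -t drbd -o debootstrap+default -s 10G web1.example.com

# Move it to its secondary node without downtime.
gnt-instance migrate web1.example.com

# See what is running where.
gnt-instance list
```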
Among other things, it runs debian.org (70 VMs, 5 servers) and grnet.gr, the Greek Research & Technology Network (10 clusters, more than 6000 VMs).
Geek trivia: several bits of it are written in Haskell.
I always hated this expression and I don't fit in at companies that use it to describe their corporate culture, but I never understood why until I read this Wikipedia entry:
“Fun and action are the rule here, and employees take few risks, all with quick feedback; to succeed, the culture encourages them to maintain a high level of relatively low-risk activity.”
(Corporate Cultures, Deal and Kennedy, p. 108)
Can we use whitebox monitoring data to forecast cascading failures in distributed software?
It is winter in Switzerland: holiday time, when you dream up interesting ideas while drinking Glühwein, and reckless off-piste skiers get buried alive or set off giant avalanches.
What does an avalanche have to do with software? If you're a novice running your extra-lean startup's "technical infrastructure" on a feeble single server, it's that post gone viral that turned a fledgling success into a pile of fuming ashes. If you're a seasoned DevOps engineer or SRE, it's that rapid-fire, sneaky Query of Death that kills your nodes faster than they can recover.
Avalanches sound scary, but zoom out, repeat, and something magical happens: order emerges from chaos.
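The avalanche dynamic is easy to see in a toy model (my own illustrative sketch, not anything from real monitoring data): spread load evenly over the healthy nodes, fail any node pushed past its capacity, redistribute, and repeat.

```python
def cascade(capacities, total_load):
    """Toy cascading-failure model: returns the capacities of surviving nodes.

    Load is spread evenly over healthy nodes; any node pushed past its
    capacity fails, and its share is redistributed, possibly avalanching.
    """
    alive = list(capacities)
    while alive:
        per_node = total_load / len(alive)
        survivors = [c for c in alive if c >= per_node]
        if len(survivors) == len(alive):
            return alive  # stable: everyone can hold their share
        alive = survivors  # overloaded nodes died; go around again
    return []  # total collapse

# Five identical nodes comfortably absorb the traffic...
print(len(cascade([30] * 5, 130)))  # 5 survivors

# ...but a heterogeneous fleet with MORE total capacity (150 > 100)
# collapses completely: the weakest node dies first, and each death
# overloads the next-weakest. That's the avalanche.
print(len(cascade([50, 40, 30, 20, 10], 100)))  # 0 survivors
```

The second example is the interesting one: aggregate capacity alone tells you nothing, because the failure of the weakest node is what triggers the slide.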
It is time to rethink how we build compilers to make software more secure.
Symbolic execution is a way to analyze software behavior. It explores a large set of parallel program paths at once and can lead to exploits that trigger vulnerabilities deep in the bowels of a piece of code.
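A minimal sketch of the idea (a deliberately toy example of mine, not a real engine): treat each program path as a set of constraints on the input, then solve each path's constraints to get a concrete input that reaches it. Real engines hand the constraints to an SMT solver such as Z3; brute-force search stands in for that here.

```python
def vulnerable(x):
    # Hypothetical target: a crash hides behind two nested branches.
    if x > 100:
        if x * 2 == 2048:
            raise RuntimeError("boom")  # the "vulnerability"
    return 0

# Each path through vulnerable() as a list of predicates on the input.
paths = [
    [lambda x: x <= 100],                          # outer branch not taken
    [lambda x: x > 100, lambda x: x * 2 != 2048],  # inner branch not taken
    [lambda x: x > 100, lambda x: x * 2 == 2048],  # the deep crash path
]

def solve(constraints, domain=range(0, 5000)):
    """Find a concrete input satisfying every predicate on a path."""
    return next((x for x in domain if all(c(x) for c in constraints)), None)

crash_input = solve(paths[2])
print(crash_input)  # 1024: the input that reaches "boom"
```

The point is that the crash input falls out of the path constraints mechanically, with no fuzzing luck required, which is exactly why this technique digs out bugs buried deep in the code.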
I recently discovered Neil Gunther's work on computer performance analysis, and in particular his "Universal Scalability Law", which links the relative capacity of a system to two terms:
- a contention cost due to queuing effects caused by limited resources.
- a coherency cost to maintain shared state.
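In its usual form (my transcription of the law, with $\alpha$ for the contention coefficient and $\beta$ for the coherency coefficient), the relative capacity of $N$ workers is:

```latex
C(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)}
```

The $\alpha (N - 1)$ term is the queuing cost and the $\beta N (N - 1)$ term is the pairwise cost of keeping shared state coherent, which is why capacity eventually goes *down* as you add workers.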
Using it, you can get the ideal number of workers for a given job:
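A small sketch of both the law and its optimum, assuming Gunther's standard formulation (the coefficient values below are made up for illustration):

```python
import math

def usl_capacity(n, alpha, beta):
    """Relative capacity C(N) under Gunther's Universal Scalability Law.

    alpha: contention (queuing) coefficient, beta: coherency coefficient.
    """
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

def optimal_workers(alpha, beta):
    """Worker count where C(N) peaks: N* = sqrt((1 - alpha) / beta)."""
    return math.sqrt((1 - alpha) / beta)

# Example: 3% contention cost, 0.1% coherency cost.
n_star = optimal_workers(alpha=0.03, beta=0.001)
print(round(n_star))  # capacity peaks around 31 workers
```

Past that peak, the quadratic coherency term dominates and adding workers makes the system slower, not faster.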
I have yet to see how useful it could be in my real world: large-scale, distributed systems.