Passing secrets to restored snapshots (Features and Ideas)

I need some API for passing secrets to servers restored from snapshots. I know there are startup scripts, but those only work for servers instantiated directly from OS images, and I cannot use OS images directly.

As the API stands now, all servers instantiated from a single snapshot share a single SSH host key. Every server could generate a new SSH host key on first boot, but then my local scripts would have no way to verify that random host key, nor any other way to verify the identity of the server, and thus no way to securely configure/specialize it.

I thought of abusing the email field of an SSH key and fetching the fake SSH key from the metadata server (169.254.169.254), but I cannot test it, because the sshkey/create API gives me a 412 status code. Smuggling configuration data in the email field of a fake SSH key feels hacky anyway. Is there a free-form metadata field that I could set in the server/create API and then see on the metadata server?
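
Roughly what I have in mind for the first-boot step, as a sketch (Python, standard library only; I cannot test it yet, and the metadata path is a guess on my part):

    # First-boot sketch: fetch the instance metadata and decode whatever was
    # smuggled into the comment field of the fake SSH key.
    # The /v1.json path is a guess; inspect http://169.254.169.254/ to confirm
    # where the injected keys actually appear.
    import base64
    import urllib.request

    metadata = urllib.request.urlopen(
        "http://169.254.169.254/v1.json", timeout=5
    ).read().decode()
    print(metadata)  # inspect this to find where the SSH keys live

    # Once the fake key is located, the payload is the last whitespace-separated
    # field of the key line, e.g.:
    fake_key = "ssh-ed25519 AAAA...publickeyblob... eyJkYl9wYXNzd29yZCI6ICJleGFtcGxlIn0="
    secret = base64.urlsafe_b64decode(fake_key.split()[-1])
    print(secret)  # b'{"db_password": "example"}'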

Comments

  • I've gotten the SSH key API to work and I can see the fake SSH key in the metadata, so there is a very hacky workaround. Any cleaner way to do this?
  • I haven't found any other way to pass secrets to restored snapshots. It looks like smuggling data in SSH keys is the only way to do it.

    I have now thoroughly tested it. SSH keys have a comment field at the end. It is usually an email address or a username, but there are no restrictions on what it can contain. You can put arbitrary data there, encoded with base64 (the URL-safe variant). There seems to be no size limit either; I have successfully uploaded a 10 KB comment in an SSH key.
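
    For reference, the local side looks roughly like this (Python, standard library only). It is only a sketch: it assumes the v1 sshkey/create endpoint with an API-Key header and name/ssh_key parameters, so double-check against the current API docs before relying on it:

        # Local side: URL-safe base64-encode the payload and stash it in the
        # comment field of a throwaway public key, then upload via sshkey/create.
        # Endpoint and parameter names are assumed from the v1 API docs.
        import base64
        import urllib.parse
        import urllib.request

        API_KEY = "YOUR_VULTR_API_KEY"            # placeholder
        payload = b'{"db_password": "example"}'   # whatever the instance needs

        comment = base64.urlsafe_b64encode(payload).decode("ascii")

        # Use a real throwaway public key here (e.g. generated with ssh-keygen);
        # only its comment field carries the data, the key itself is never used.
        throwaway_pubkey = "ssh-ed25519 AAAA...publickeyblob..."
        fake_key = throwaway_pubkey + " " + comment

        body = urllib.parse.urlencode(
            {"name": "config-blob", "ssh_key": fake_key}
        ).encode()
        req = urllib.request.Request(
            "https://api.vultr.com/v1/sshkey/create",
            data=body,
            headers={"API-Key": API_KEY},
        )
        print(urllib.request.urlopen(req).read().decode())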
  • That's sneaky!

    I'm not too familiar with the API, but it sounds like a useful addition.
  • edited July 2017
    This is exactly the problem that's been bugging me for the last several days. They have user_data, which can be set through the API (https://www.vultr.com/api/#server_set_user_data; a sketch is at the end of this comment) and which cloud-init can supposedly access (through some mechanism that's never really explained), but there is no way to read it through the metadata API, even though that would be the obvious solution... Your fake SSH key method should work, though, if there's no alternative; thank you.

    edit: For that matter, how secure is the metadata API? It's not an HTTPS URL.
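
    For anyone who wants to try the user_data route anyway, setting it looks roughly like this. This is only a sketch: I'm assuming the endpoint takes a SUBID plus base64-encoded userdata with the API-Key header, so verify against the linked docs:

        # Sketch: push user_data to an existing instance via the v1 API.
        # The endpoint and the SUBID / base64-encoded userdata parameters are
        # assumptions taken from the server_set_user_data docs -- verify first.
        import base64
        import urllib.parse
        import urllib.request

        API_KEY = "YOUR_VULTR_API_KEY"   # placeholder
        SUBID = "576965"                 # placeholder instance id

        userdata = base64.b64encode(b"#cloud-config\nhostname: restored-01\n").decode()

        body = urllib.parse.urlencode({"SUBID": SUBID, "userdata": userdata}).encode()
        req = urllib.request.Request(
            "https://api.vultr.com/v1/server/set_user_data",
            data=body,
            headers={"API-Key": API_KEY},
        )
        urllib.request.urlopen(req)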
  • @TonyP The metadata API is secure without HTTPS because the 169.254.x.x addresses are link-local and are not routed beyond the local network segment. In fact, AWS/EC2 uses the same range for its instance metadata.
  • For anyone who wants to provide configuration information to Vultr instances, I've created an open-source tool (permissive MIT license) to solve this. Basically, you run one executable as a service; it uses the Vultr API to get information about your server instances, and it can be bound to listen only on your 10.x.x.x private network. When any of your instances does a GET http://10.1.1.1 (or whatever your designated instance's IP is), it returns the JSON describing just the server that made the request, determined by matching the connection's client IP (either IPv4 or IPv6) against those listed in the Vultr API server list response.

    In particular, the service strips out any sensitive information and, as a crucial bonus, can also include the userdata for the requesting instance if so configured. The source is in Go, and a single-file binary executable was built on Ubuntu 18.04 64-bit. A minimal instance-side client sketch is below, after the repo link.

    By the way, QuickX is building several developer tools and services that might be of interest to this crowd. Please check https://quickx.app/ for new services as they get rolled out.

    https://github.com/quickx-app/vultrdata
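
    On the instance side, consuming the service is just an HTTP GET against the designated private address plus JSON parsing. A minimal sketch follows; the 10.1.1.1 address and the userdata field name are assumptions based on the description above, so check the project's README for the actual interface:

        # Sketch of an instance-side client for a private metadata service such
        # as vultrdata. The address and response field names are assumptions.
        import json
        import urllib.request

        raw = urllib.request.urlopen("http://10.1.1.1/", timeout=5).read().decode()
        info = json.loads(raw)

        # The service is described as returning only the record for the instance
        # that made the request, matched by client IP.
        print(info.get("userdata", ""))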