Fine-tuning a PCI-Passthrough vfio VM

Masking interrupts, shielding CPUs, and other techniques I have found useful for reducing latency and improving performance

Introduction

As a quick introduction, vfio is the name of the technology built into the Linux kernel which allows mapping I/O devices to KVM guests. The name is also slightly abused to refer to the use of said feature, typically to pass discrete GPUs through to Windows/OSX guests on Linux hosts, which allows the user to run GPU-bound tasks in those OSes without the need for a dual-boot setup.

In short, this is what some crazy gamers like me do to play stuff without running Windows bare-metal or going through the issues Proton still has.

This, however, is not free of challenges. Virtualization is a complex topic, especially when running latency-sensitive workloads such as gaming, where more than 16 ms between frames (roughly the budget for 60 FPS) is usually considered unacceptable.

In this article I will explore some of the techniques I have used and tested, which are less well known than the usual CPU pinning, Hyper-V Enlightenments, and in general everything covered on the Arch Linux wiki page about VFIO.

I repeat: This article considers the topics covered in the wiki page linked above as the bare minimum. If you intend to use this as a reference for tweaking your own setup, make sure you understand all the strategies discussed there.

And now, let’s jump into the action.

As Long as I Can’t Do It Myself

The client didn’t want users to be able to download paid video content from their website.

Disclaimer I: This post includes examples of BAD practices such as a bit of “roll your own crypto” and “security by obscurity”. I DO NOT recommend doing this unless you understand all the downsides of this approach.

Disclaimer II: The approach followed here was chosen as a temporary solution to the problem, one which needed to be developed in a short time window (it took me about 6 hours to get it working).

I’m developing a web page for a client, which uses Firebase as its backend. The owners upload videos to the server, which clients pay to watch. I was asked by the client to make these videos “as hard to download as possible”. Of course, when I was asked to do this, the first thing I thought about was DRM. But it turns out that DRM solutions are quite complex to implement and require custom code running on the backend. They’re also quite expensive in terms of server CPU time. As this project started out using Firebase (a choice I did not make), implementing custom and expensive backend logic wasn’t the most desirable approach.

After discussing this with the client and telling them a proper solution would take a long time to land in production, they told me something along the lines of “as long as I can’t do[wnload the videos] myself, it’s OK with me” (translation by me; the original was, in Spanish, “que no lo pueda hacer yo”).

By this they meant that someone without technical skills should not be able to download the video with some google-able process, like right-click -> download, or installing an addon. This made sense to me, as, in the end, anyone can just hit play and use a screen recording program to rip the video and then share it for free. So I started thinking about what I could do to prevent this.

Some requirements I set for myself were:

  • Avoid using non-standard technologies

  • Avoid using deprecated technologies (yes, I’m looking at you, Flash)

  • Make it painless for the user

  • Make it cross-platform (avoid Silverlight & co.)

  • Quick development time

Basically, I could sum all of these up as: “Do it using only HTML5 and JavaScript”.

As for the “don’t let users download the video (easily)” part, I mainly devised two things:

  1. It should not appear as a media element in the page (i.e., no src=""), so browsers and addons can’t locate it.

  2. The downloaded file must not be the video file “as-is”, so that if someone opens the dev tools panel, looks at the network requests, and figures out the video URL, they can’t just download it and play it.

And so I decided to take the following approach: upload the video file encrypted (preferably symmetrically), and use some JS magick to decrypt it on the fly and feed it to the browser.

This, however, doesn’t look like a trivial task, so let’s dig into it.
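To make the idea more concrete, here is a minimal sketch of what that decrypt-and-play dance could look like in the browser, assuming the file was encrypted with AES-GCM and the 12-byte IV was prepended to it. The URL, key handling, and container type below are placeholders for illustration, not the actual values or code used in the project:

// Hedged sketch: fetch an encrypted video, decrypt it with the Web Crypto API,
// and hand the plaintext to a <video> element through a blob: URL.
async function playEncryptedVideo(videoElement) {
  // Download the encrypted bytes; the URL alone is useless without the key.
  const response = await fetch('https://example.com/videos/lesson1.enc');
  const encrypted = await response.arrayBuffer();

  // Import the symmetric key. A real deployment would fetch the key material
  // per user/session; the all-zero key here is just a placeholder.
  const rawKey = new Uint8Array(32);
  const key = await crypto.subtle.importKey('raw', rawKey, 'AES-GCM', false, ['decrypt']);

  // Decrypt in memory, assuming the 12-byte IV is prepended to the ciphertext.
  const iv = new Uint8Array(encrypted.slice(0, 12));
  const decrypted = await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, encrypted.slice(12));

  // Feed the plaintext to the player through a blob: URL, so the page never
  // points a plain src attribute at the original file on the server.
  const blob = new Blob([decrypted], { type: 'video/mp4' });
  videoElement.src = URL.createObjectURL(blob);
  await videoElement.play();
}

playEncryptedVideo(document.querySelector('video'));

A more elaborate, truly “on the fly” version would decrypt and append chunks through a MediaSource buffer instead of buffering the whole file in memory, but the principle stays the same: the file sitting on the server is useless without the key, and the page never exposes a plain src pointing at the raw video.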

Reverse-engineering an Android app to get access to its HTTPS API

A commercial, proprietary Android app is suspected to use an HTTPS API to fetch the data it displays. The ability to obtain this data is valuable to us, so we apply different reversing techniques to find out where this API is located and how to use it.

Disclaimer: For privacy (and maybe legal) reasons, we will not disclose identifying details of the app which was reverse engineered.

As mentioned in the description, this post describes a (successful) attempt to discover the source of the data an Android app uses, which would allow us to write scripts and harvest this data for our own benefit.

The first reasonable assumption we make is that this source of data is an HTTP(S) API, probably REST. To test this out, we create an AVD (Android Virtual Device), throw the APK into it, and see what kind of traffic it produces:

$ /opt/android-sdk/tools/emulator -avd MarshmallowPutillax64
$ /opt/android-sdk/platform-tools/adb install /tmp/target.apk

We launch Wireshark along with our target app and wait:

[Screenshot: Wireshark capture of the emulator’s traffic]

As expected, most of the traffic is sent over port 443, and an earlier unencrypted HTTP request seems to hint that this later traffic is indeed HTTPS. It’s time to start playing with proxies.

Fortunately, the Android emulator has native support for HTTP proxies. That is, we can tell the emulator to transparently forward any HTTP(S) requests through a user-defined proxy. To accomplish this, we just launch the emulator with the proper CLI option:

$ /opt/android-sdk/tools/emulator -avd MarshmallowPutillax64 -http-proxy 127.0.0.1:8080

Of course, this requires some kind of MITM-capable proxy listening on port 8080 of our machine. There are several options out there, like mitmproxy or Burp Suite. I’ll use the latter, simply because I’m more used to it.

Of course this isn’t enough. The proxy will intercept HTTPS connections on the fly and generate a custom CA-signed certificate for each domain, but the system won’t trust these certificates. To bypass this restriction, we need to export the CA certificate the proxy uses and add it to the Android system.

[Screenshot: exporting Burp Suite’s CA certificate]

We can now adb push this file to the AVD and add it via the system settings:

$ /opt/android-sdk/platform-tools/adb push /tmp/ca.crt /sdcard

After adding it to the system, we can try to access any site with the web browser, and the certificates will be accepted as valid. And of course, the traffic will show up in our proxy’s log.

[Screenshot: Android browser accessing an HTTPS site through the proxy]

Now it’s time to try it with the app!
