My first patch landed in Firefox 19, and my final patch as an employee has landed in Nightly for Firefox 93.
I’ll be moving on to something new in a few weeks’ time, but for now, I’d just like to say this:
My time at Mozilla has made me into a better software developer, a better leader, and more importantly, a better person.
I’d like to thank all the Mozillians whom I have interacted with over the years for their contributions to making that happen.
I will continue to update this blog with catch-up posts describing my Mozilla work, though I am unsure what content I will be able to contribute beyond that. Time will tell!
Until next time…
Here is an index of all the entries in this series:
During early 2019, Mozilla was working to port Firefox to run on the new AArch64 builds of Windows. At our December 2018 all-hands, I brought up the necessity of including the DLL Interceptor in our porting efforts. Since no good deed goes unpunished, I was put in charge of doing the work! [I’m actually kidding here; this project was right up my alley and I was happy to do it! – Aaron]
Before continuing, you might want to review my previous entry describing the Great Interceptor Refactoring of 2018, as this post revisits some of the concepts introduced there.
Let us review some DLL Interceptor terminology:
On more than one occasion I had to field questions about why this work was even necessary for AArch64: there aren’t going to be many injected DLLs in a Win32 ecosystem running on a shiny new processor architecture! In fact, the DLL Interceptor is used for more than just facilitating the blocking of injected DLLs; we also use it for other purposes.
Not all of this work was done in one bug: some tasks were more urgent than others. I began this project by enumerating our extant uses of the interceptor to determine which instances were relevant to the new AArch64 port. I threw a record of each instance into a colour-coded spreadsheet, which proved to be very useful for tracking progress: Reds were “must fix” instances, yellows were “nice to have” instances, and greens were “fixed” instances. Coordinating with the milestones laid out by program management, I was able to assign each instance to a bucket which would help determine a total ordering for the various fixes. I landed the first set of changes in bug 1526383, and the second set in bug 1532470.
It was now time to sit down, download some AArch64 programming manuals, and take a look at what I was dealing with. While I have been messing around with x86 assembly since I was a teenager, my first exposure to RISC architectures was via the DLX architecture introduced by Hennessy and Patterson in their textbooks. While DLX was crafted specifically for educational purposes, it served for me as a great point of reference. When I was a student taking CS 241 at the University of Waterloo, we had to write a toy compiler that generated DLX code. That experience ended up saving me a lot of time when looking into AArch64! While the latter is definitely more sophisticated, I could clearly recognize analogs between the two architectures.
In some ways, targeting a RISC architecture greatly simplifies things: The DLL Interceptor only needs to concern itself with a small subset of the AArch64 instruction set: loads and branches. In fact, the DLL Interceptor’s AArch64 disassembler only looks for nine distinct instructions! As a bonus, since the instruction length is fixed, we can easily copy over verbatim any instructions that are not loads or branches!
On the other hand, one thing that increased the complexity of the port is that some branch instructions to relative addresses have maximum offsets. If we must branch farther than that maximum, we must take alternate measures. For example, in AArch64, an unconditional branch with an immediate offset must land in the range of ±128 MiB from the current program counter.
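To make that limit concrete, here is a small Python sketch of the reachability test (mine, not the Interceptor’s actual code): the AArch64 B instruction encodes a signed 26-bit offset counted in 4-byte words, which is what yields the ±128 MiB reach.

```python
# Sketch (not Gecko code): can an AArch64 immediate B reach `target` from `pc`?
# The instruction holds a signed 26-bit word offset: +/-2**25 words = +/-128 MiB.
B_RANGE = 128 * 1024 * 1024

def b_can_reach(pc: int, target: int) -> bool:
    offset = target - pc
    # Offsets must be word-aligned and within the signed 26-bit word range.
    return offset % 4 == 0 and -B_RANGE <= offset < B_RANGE
```

A hook function a full 4 GiB away, for example, fails this test and needs another approach.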
Why is this a problem, you ask? Well, Detours-style interception must overwrite
the first several instructions of the target function. To write an absolute jump,
we require at least 16 bytes: 4 for an LDR
instruction, 4 for a BR
instruction, and another 8 for the 64-bit absolute branch target address.
Unfortunately, target functions may be really short! Some of the target functions that we need to patch consist only of a single 4-byte instruction!
In this case, our only option for patching the target is to use an immediate B
instruction, but that only works if our hook function falls within that ±128MiB
limit. If it does not, we need to construct a veneer. A veneer is a special
trampoline whose location falls within the target range of a branch instruction.
Its sole purpose is to provide an unconditional jump to the “real” desired
branch target that lies outside of the range of the original branch. Using
veneers, we can successfully hook a target function even if it is only one
instruction (ie, 4 bytes) in length, and the hook function lies more than 128MiB
away from it. The AArch64 Procedure Call Standard specifies X16
as a volatile
register that is explicitly intended for use by veneers: veneers load an
absolute target address into X16
(without needing to worry about whether or
not they’re clobbering anything), and then unconditionally jump to it.
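As an illustration, here is how a veneer’s instruction sequence could be encoded in Python (my sketch of the standard AArch64 encodings, not the Interceptor’s actual code generator): an LDR literal loads the absolute target into X16, BR X16 jumps to it, and the 8-byte address sits in a literal pool immediately after the two instructions.

```python
import struct

def encode_ldr_literal_x(rt: int, byte_offset: int) -> int:
    # LDR (literal), 64-bit variant: 0x58000000 | imm19 << 5 | Rt,
    # where imm19 is the PC-relative offset in words.
    assert byte_offset % 4 == 0
    return 0x58000000 | ((byte_offset // 4) & 0x7FFFF) << 5 | rt

def encode_br(rn: int) -> int:
    # BR Xn: 0xD61F0000 | Rn << 5
    return 0xD61F0000 | rn << 5

def build_veneer(target: int) -> bytes:
    # LDR X16, [PC, #8]; BR X16; .quad target  -- 16 bytes total
    return struct.pack("<II", encode_ldr_literal_x(16, 8), encode_br(16)) + \
           struct.pack("<Q", target)
```

The same three-part sequence is also what the 16-byte absolute-jump patch described earlier looks like.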
To determine how many instructions the target function has for us to work with, we make two passes over the target function’s code. The first pass simply counts how many instructions are available for patching (up to the 4 instruction maximum needed for absolute branches; we don’t really care beyond that).
The second pass actually populates the trampoline, builds the veneer (if necessary), and patches the target function.
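The first pass can be pictured with a sketch like this (hypothetical, with a stubbed safety predicate standing in for the real disassembler):

```python
def count_patchable(instructions, is_relocatable, needed=4):
    # First pass: how many leading 4-byte instructions can we overwrite?
    # We stop counting at `needed`, the worst case for an absolute jump;
    # beyond that we don't care.
    count = 0
    for insn in instructions:
        if not is_relocatable(insn):
            break
        count += 1
        if count == needed:
            break
    return count
```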
Since the DLL interceptor is already well-equipped to build trampolines, it did not take much effort to add support for constructing veneers. However, where to write out a veneer is just as important as what to write to a veneer.
Recall that we need our veneer to reside within ±128 MiB of an immediate branch. Therefore, we need to be able to exercise some control over where the trampoline memory for veneers is allocated. Until this point, our trampoline allocator had no need to care about this; I had to add this capability.
Firstly, I needed to make the MMPolicy
classes range-aware: we need to be able
to allocate trampoline space within acceptable distances from branch instructions.
Consider that, as described above, a branch instruction may have limits on the extents of its target. As data, this is easily formatted as a pivot (ie, the PC at the location where the branch instruction is encountered), and a maximum distance in either direction from that pivot.
On the other hand, range-constrained memory allocation tends to work in terms
of lower and upper bounds. I wrote a conversion method, MMPolicyBase::SpanFromPivotAndDistance, to convert between the two formats. In addition to format conversion, this method
also constrains resulting bounds such that they are above the 1MiB mark of the
process’ address space (to avoid reserving memory in VM regions that are
sensitive to compatibility concerns), as well as below the maximum allowable
user-mode VM address.
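In Python, my understanding of that conversion looks roughly like the following; the specific floor and ceiling values are illustrative, not the exact constants Gecko uses.

```python
MIN_ADDR = 1024 * 1024           # stay above the first MiB of address space
MAX_USER_ADDR = 0x7FFFFFFF0000   # illustrative x64 user-mode ceiling

def span_from_pivot_and_distance(pivot: int, distance: int):
    # Convert (pivot, +/-distance) into clamped [lower, upper) bounds
    # suitable for a range-constrained allocator.
    lower = max(pivot - distance, MIN_ADDR)
    upper = min(pivot + distance, MAX_USER_ADDR)
    return lower, upper
```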
Another issue with range-aware VM allocation is determining the location, within
the allowable range, for the actual VM reservation. Ideally we would like the
kernel’s memory manager to choose the best location for us: its holistic view of
existing VM layout (not to mention ASLR) across all processes will provide
superior VM reservations. On the other hand, the Win32 APIs that facilitate this
are specific to Windows 10. When available, MMPolicyInProcess uses VirtualAlloc2 and MMPolicyOutOfProcess uses MapViewOfFile3.
When we’re running on Windows versions where those APIs are not yet available,
we need to fall back to finding and reserving our own range. The
MMPolicyBase::FindRegion
method handles this for us.
All of this logic is wrapped up in the MMPolicyBase::Reserve
method. In
addition to the desired VM size and range, the method also accepts two functors
that wrap the OS APIs for reserving VM. Reserve
uses those functors when
available, otherwise it falls back to FindRegion
to manually locate a suitable
reservation.
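In pseudo-Python (names and free-list shape are mine), the flow of Reserve reduces to: use the OS-assisted path when its functor is available, otherwise search for a region ourselves, as FindRegion does.

```python
def find_region(size, span, free_regions):
    # Manual fallback: first free region that can hold `size` bytes
    # entirely inside [lower, upper). `free_regions` is (start, end) pairs.
    lower, upper = span
    for start, end in free_regions:
        start = max(start, lower)
        if start + size <= min(end, upper):
            return start
    return None

def reserve(size, span, free_regions, os_reserve=None):
    if os_reserve is not None:
        # Windows 10+: let the kernel pick the spot (VirtualAlloc2 / MapViewOfFile3)
        return os_reserve(size, span)
    return find_region(size, span, free_regions)
```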
Now that our memory management primitives were range-aware, I needed to shift my focus over to our VM sharing policies.
One impetus for the Great Interceptor Refactoring was to enable separate
Interceptor instances to share a unified pool of VM for trampoline memory.
To make this range-aware, I needed to make some additional changes to
VMSharingPolicyShared
. It would no longer be sufficient to assume that we
could just share a single block of trampoline VM — we now needed to make the
shared VM policy capable of potentially allocating multiple blocks of VM.
VMSharingPolicyShared
now contains a mapping of ranges to VM blocks. If we
request a reservation which an existing block satisfies, we re-use that block.
On the other hand, if we require a range that is yet unsatisfied, then we need to
allocate a new one. I admit that I kind of half-assed the implementation of the
data structure we use for the mapping; I was too lazy to implement a fully-fledged
interval tree. The current implementation is probably “good enough,” though it is worth fixing at some point.
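A toy model of that mapping might look like this (my sketch; the real VMSharingPolicyShared is considerably more careful):

```python
class SharedTrampolinePool:
    # Toy model: reuse an existing block when it satisfies the requested
    # span, otherwise reserve a new one. A real implementation would use
    # an interval tree instead of this linear scan.
    def __init__(self, reserve):
        self._reserve = reserve   # callable that allocates a block for a span
        self._blocks = []         # list of (span, block) pairs

    def get_block(self, span):
        lower, upper = span
        for (blo, bhi), block in self._blocks:
            if lower <= blo and bhi <= upper:  # existing block lies inside span
                return block
        block = self._reserve(span)
        self._blocks.append((span, block))
        return block
```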
Finally, I added a new generic class, TrampolinePool
, that acts as an
abstraction of a reserved block of VM address space. The main interceptor code
requests a pool by calling the VM sharing policy’s Reserve
method, then it
uses the pool to retrieve new Trampoline
instances to be populated.
It is much simpler to generate trampolines for AArch64 than it is for x86(-64).
The most noteworthy addition to the Trampoline
class is the WriteLoadLiteral
method, which writes an absolute address into the trampoline’s literal pool,
followed by writing an LDR
instruction referencing that literal into the
trampoline.
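Conceptually, the method behaves like this Python sketch (an approximation of the AArch64 LDR-literal encoding; the layout and names are mine, not the actual Trampoline class): the 64-bit value goes into the next literal-pool slot at the end of the trampoline, and an LDR referencing it PC-relatively is appended to the code.

```python
import struct

class Trampoline:
    # Toy model: instructions accumulate at the front of a fixed-size
    # trampoline; 64-bit literals are placed at the back. Offsets in bytes.
    def __init__(self, size=64):
        self.code = bytearray()
        self.literals = []
        self.size = size

    def write_load_literal(self, rt: int, value: int):
        # Reserve the next literal-pool slot, counting back from the end.
        slot = self.size - 8 * (len(self.literals) + 1)
        self.literals.append((slot, value))
        offset = slot - len(self.code)  # PC-relative distance to the literal
        insn = 0x58000000 | ((offset // 4) & 0x7FFFF) << 5 | rt
        self.code += struct.pack("<I", insn)
```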
Thanks for reading! Coming up next time: My Untrusted Modules Opus.
Yes, you are reading the dates correctly: I am posting this over two years after I began this series. I am trying to get caught up on documenting my past work!
Given that the launcher process completely changes how our Win32 Firefox builds start, I needed to update both our CI harnesses and the launcher process itself. I didn’t do much that was particularly noteworthy from a technical standpoint, but I will mention some important points:
During normal use, the launcher process usually exits immediately after the browser process is confirmed to have started. This was a deliberate design decision that I made. Having the launcher process wait for the browser process to terminate would not do any harm; however, I did not want the launcher process hanging around in Task Manager and being misunderstood by users who are checking their browser’s resource usage.
On the other hand, such a design completely breaks scripts that expect to start
Firefox and be able to synchronously wait for the browser to exit before
continuing! Clearly I needed to provide an opt-in for the latter case, so I added
the --wait-for-browser
command-line option. The launcher process also implicitly
enables this mode under a few other scenarios.
Secondly, there is the issue of debugging. Developers were previously used to
attaching to the first firefox.exe
process they see and expecting to be debugging
the browser process. With the launcher process enabled by default, this is no
longer the case.
There are a few options here:

- Use the debugger’s -o command-line flag, or use the “Debug child processes also” checkbox in the GUI;
- Use the MOZ_DEBUG_BROWSER_PAUSE environment variable, which allows developers to set a timeout (in seconds) for the browser process to print its pid to stdout and wait for a debugger attachment.

As I have alluded to in previous posts, I needed to measure the effect of adding
an additional process to the critical path of Firefox startup. Since in-process
testing will not work in this case, I needed to use something that could provide
a holistic view across both launcher and browser processes. I decided to enhance
our existing xperf
suite in Talos to support my use case.
I already had prior experience with xperf
; I spent a significant part of 2013
working with Joel Maher to put the xperf
Talos suite into production. I also
knew that the existing code was not sufficiently generic to be able to handle my
use case.
I threw together a rudimentary analysis framework
for working with CSV-exported xperf data. Then, after Joel’s review, I vendored
it into mozilla-central
and used it to construct an analysis for startup time.
[While a more thorough discussion of this framework is definitely warranted, I
also feel that it is tangential to the discussion at hand; I’ll write a dedicated
blog entry about this topic in the future. – Aaron]
In essence, the analysis considers the following facts when processing an xperf recording:

- …firefox.exe process that runs;

For our analysis, we needed to do the following:

- …firefox.exe process being created;
- …firefox.exe process;

This block of code demonstrates how that analysis is specified using my analyzer framework.
Overall, these test results were quite positive. We saw a very slight but imperceptible increase in startup time on machines with solid-state drives, however the security benefits from the launcher process outweigh this very small regression.
Most interestingly, we saw a significant improvement in startup time on Windows
10 machines with magnetic hard disks! As I mentioned in Q2 Part 3, I believe
this improvement is due to reduced hard disk seeking thanks to the launcher
process forcing \windows\system32
to the front of the dynamic linker’s search
path.
By Q3 I had the launcher process in a state where it was built by default into Firefox, but it was still opt-in. As I have written previously, we needed the launcher process to gracefully fail even without having the benefit of various Gecko services such as preferences and the crash reporter.
Firstly, I created a new class, WindowsError, that encapsulates all types of Windows error codes. As an aside, I would strongly
encourage all Gecko developers who are writing new code that invokes Windows APIs
to use this class in your error handling.
WindowsError
is currently able to store Win32 DWORD
error codes, NTSTATUS
error codes, and HRESULT
error codes. Internally the code is stored as an
HRESULT
, since that type has encodings to support the other two. WindowsError
also provides a method to convert its error code to a localized string for
human-readable output.
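The unification works because HRESULT has standard mappings for the other two code spaces; in Python (the real class is C++), the well-known winerror.h conversions look like this:

```python
# Standard Windows conversions (see winerror.h): a Win32 error code maps
# into the FACILITY_WIN32 range, and an NTSTATUS is tagged with
# FACILITY_NT_BIT so it can round-trip through an HRESULT.
FACILITY_NT_BIT = 0x10000000

def hresult_from_win32(code: int) -> int:
    if code <= 0:
        return code & 0xFFFFFFFF      # already an HRESULT (or S_OK)
    return 0x80070000 | (code & 0xFFFF)

def hresult_from_nt(status: int) -> int:
    return (status | FACILITY_NT_BIT) & 0xFFFFFFFF
```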
As for the launcher process itself, nearly every function in the launcher
process returns a mozilla::Result
-based type. In case of error, we return a
LauncherResult
, which [as of 2018; this has changed more recently – Aaron]
is a structure containing the error’s source file, line number, and WindowsError
describing the failure.
While all Result
s in the launcher process may be indicating a successful
start, we may not yet be out of the woods! Consider the possibility that the
various interventions taken by the launcher process might have somehow impaired
the browser process’ ability to start!
To deal with this situation, the launcher process and the browser process share code that tracks whether both processes successfully started in sequence.
When the launcher process is started, it checks information recorded about the previous run. If the browser process previously failed to start correctly, the launcher process disables itself and proceeds to start the browser process without any of its typical interventions.
Once the browser has successfully started, it reflects the launcher process
state into telemetry, preferences, and about:support
.
Future attempts to start Firefox will bypass the launcher process until the next time the installation’s binaries are updated, at which point we reset and attempt once again to start with the launcher process. We do this in the hope that whatever was failing in version n might be fixed in version n + 1.
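A simplified model of that decision (the function name and record shape are mine, not the actual Gecko code):

```python
def should_run_launcher(record, current_version):
    # record: {"version": str, "browser_started_ok": bool} persisted from
    # the previous run, or None on first run. A binary update resets a
    # previous failure so that we try the launcher process again.
    if record is None:
        return True
    if record["version"] != current_version:
        return True                       # new binaries: try again
    return record["browser_started_ok"]   # same version: only if it worked
```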
Note that this update behaviour implies that there is no way to forcibly and permanently disable the launcher process. This is by design: the error detection feature is designed to prevent the browser from becoming unusable, not to provide configurability. The launcher process is a security feature and not something that we should want users adjusting any more than we would want users to be disabling the capability system or some other important security mitigation. In fact, my original roadmap for InjectEject called for eventually removing the failure detection code if the launcher failure rate ever reached zero.
The pref reflection built into the failure detection system is bi-directional. This allowed us to ship a release where we ran a study with a fraction of users running with the launcher process enabled by default.
Once we rolled out the launcher process at 100%, this pref also served as a useful “emergency kill switch” that we could have flipped if necessary.
Fortunately our experiments were successful and we rolled the launcher process out to release at 100% without ever needing the kill switch!
At this point, this pref should probably be removed, as we no longer need nor want to control launcher process deployment in this way.
When telemetry is enabled, the launcher process is able to convert its LauncherResult into a ping which is sent in the background by ping-sender.
When telemetry is disabled, we perform a last-ditch effort to surface the error
by logging details about the LauncherResult
failure in the Windows Event Log.
Thanks for reading! This concludes my 2018 Roundup series! There is so much more work from 2018 that I did for this project that I wish I could discuss, but for security reasons I must refrain. Nonetheless, I hope you enjoyed this series. Stay tuned for more roundups in the future!
Yes, you are reading the dates correctly: I am posting this nearly two years after I began this series. I am trying to get caught up on documenting my past work!
Once I had landed the skeletal implementation of the launcher process, it was time to start making it do useful things.
[For an overview of Windows integrity levels, check out this MSDN page – Aaron]
Since Windows Vista, security tokens for standard users have run at a medium integrity level (IL) by default.
When UAC is enabled, members of the Administrators
group also run as a standard user with a medium IL, with
the additional ability of being able to “elevate” themselves to a high IL. When UAC is disabled, an administrator
receives a token that always runs at the high integrity level.
Running a process at a high IL is something that is not to be taken lightly: at that level, the process may alter system settings and access files that would otherwise be restricted by the OS.
While our sandboxed content processes always run at a low IL, I believed that defense-in-depth called for ensuring that the browser process did not run at a high IL. In particular, I was concerned about cases where elevation might be accidental. Consider, for example, a hypothetical scenario where a system administrator is running two open command prompts, one elevated and one not, and they accidentally start Firefox from the one that is elevated.
This was a perfect use case for the launcher process: it detects whether it is running at high IL, and if so, it launches the browser with medium integrity.
Unfortunately some users prefer to configure their accounts to run at all times as Administrator
with high integrity!
This is a terrible idea from a security perspective, but it is what it is; in my experience, most users who
run with this configuration do so deliberately, and they have no interest in being lectured about it.
Unfortunately, users running under this account configuration will experience side-effects of the Firefox browser process running at medium IL. Specifically, a medium IL process is unable to initiate IPC connections with a process running at a higher IL. This will break features such as drag-and-drop, since even the administrator’s shell processes are running at a higher IL than Firefox.
Being acutely aware of this issue, I included an escape hatch for these users: I implemented a command line option that prevents the launcher process from de-elevating when running with a high IL. I hate that I needed to do this, but moral suasion was not going to be an effective technique for solving this problem.
Another tool that the launcher process enables us to utilize is process mitigation options. Introduced in Windows 8, the kernel provides several opt-in flags that allow us to add prophylactic policies to our processes in an effort to harden them against attacks.
Additional flags have been added over time, so we must be careful to only set flags that are supported by the version of Windows on which we’re running.
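One way to picture that guard (a sketch with invented flag names and illustrative build numbers; the real code checks Windows build numbers against each mitigation’s introduction):

```python
# Hypothetical table: mitigation flag -> minimum Windows build that knows it.
# The flag names and build numbers here are placeholders, not real values.
SUPPORTED_SINCE = {
    "PREFER_SYSTEM32": 9200,
    "SOME_NEWER_FLAG": 17134,
}

def usable_flags(requested, current_build):
    # Only pass along the flags that this Windows version understands;
    # unknown flags are assumed to be newer than anything we run on.
    return [f for f in requested
            if SUPPORTED_SINCE.get(f, 2**31) <= current_build]
```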
We could have set some of these policies by calling the
SetProcessMitigationPolicy
API.
Unfortunately this API is designed for a process to use on itself once it is already running. This implies that there
is a window of time between process creation and the time that the process enables its mitigations where an attack could occur.
Fortunately, Windows provides a second avenue for setting process mitigation flags: these flags may be set as part of an attribute list in the STARTUPINFOEX structure that we pass into CreateProcess.
Perhaps you can now see where I am going with this: The launcher process enables us to specify process mitigation flags for the browser process at the time of browser process creation, thus preventing the aforementioned window of opportunity for attacks to occur!
While there are other flags that we could support in the future, the initial mitigation policy that I added was the
PROCESS_CREATION_MITIGATION_POLICY_IMAGE_LOAD_PREFER_SYSTEM32_ALWAYS_ON
flag. [Note that I am only discussing flags applied to the browser process; sandboxed processes receive additional mitigations. – Aaron]
This flag forces the Windows loader to always use the Windows system32
directory as the first directory in its search path,
which prevents library preload attacks. Using this mitigation also gave us an unexpected performance gain on devices with
magnetic hard drives: most of our DLL dependencies are either loaded using absolute paths, or reside in system32
. With
system32
at the front of the loader’s search path, the resulting reduction in hard disk seek times produced a slight but
meaningful decrease in browser startup time! How I made these measurements is addressed in a future post.
This concludes the Q2 topics that I wanted to discuss. Thanks for reading! Coming up in H2: Preparing to Enable the Launcher Process by Default.
Yes, you are reading the dates correctly: I am posting this nearly two years after I began this series. I am trying to get caught up on documenting my past work!
One of the things I added to Firefox for Windows was a new process called the “launcher process.” “Bootstrap process” would be a better name, but we already used the term “bootstrap” for our XPCOM initialization code. Instead of overloading that term and adding potential confusion, I opted for using “launcher process” instead.
The launcher process is intended to be the first process that runs when the user starts Firefox. Its sole purpose is to create the “real” browser process in a suspended state, set various attributes on the browser process, resume the browser process, and then self-terminate.
In bug 1454745 I implemented an initial skeletal (and opt-in) implementation of the launcher process.
This seems like pretty straightforward code, right? Naïvely, one could just rip a CreateProcess
sample off of MSDN and call it a day. The actual launcher process implementation is more complicated than
that, for reasons that I will outline in the following sections.
I wanted the launcher process to exist as a special “mode” of firefox.exe
, as opposed to a distinct
executable.
By definition, the launcher process lies on the critical path to browser startup. I needed to be very conscious of how we affect overall browser startup time.
Since the launcher process is built into firefox.exe
, I needed to examine that executable’s existing
dependencies to ensure that it is not loading any dependent libraries that are not actually needed
by the launcher process. Other than the essential Win32 DLLs kernel32.dll
and advapi32.dll
(and their
dependencies), I did not want anything else to load. In particular, I wanted to avoid loading user32.dll
and/or gdi32.dll
, as this would trigger the initialization of Windows’ GUI facilities, which would be a
huge performance killer. For that reason, most browser-mode library dependencies of firefox.exe
are either delay-loaded or are explicitly loaded via LoadLibrary
.
We wanted the launcher process both to respect Firefox’s safe mode and to alter its behaviour as necessary when safe mode is requested.
There are multiple mechanisms used by Firefox to detect safe mode. The launcher process detects
all of them except for one: Testing whether the user is holding the shift key. Retrieving keyboard
state would trigger loading of user32.dll
, which would harm performance as I described above.
This is not too severe an issue in practice: The browser process itself would still detect the shift key. Furthermore, while the launcher process may in theory alter its behaviour depending on whether or not safe mode is requested, none of its behaviour changes are significant enough to materially affect the browser’s ability to start in safe mode.
Also note that, for serious cases where the browser is repeatedly unable to start, the browser triggers a restart in safe mode via environment variable, which is a mechanism that the launcher process honours.
We wanted the launcher process to behave well with respect to automated testing.
The skeletal launcher process that I landed in Q2 included code to pass its console handles on to the browser process, but there was more work necessary to completely handle this case. These capabilities were not yet an issue because the launcher process was opt-in at the time.
We wanted the launcher process to gracefully handle failures even though, also by definition, it does not have access to facilities that internal Gecko code has, such as preferences and the crash reporter.
The skeletal launcher process that I landed in Q2 did not yet utilize any special error handling code, but this was also not yet an issue because the launcher process was opt-in at this point.
Thanks for reading! Coming up in Q2, Part 3: Fleshing Out the Launcher Process