UndoDB is unique in that it records with low overhead and without restriction on CPU make/model or execution environment (e.g. AWS), and it comes with a host of features, including support for shared memory, deferred recording, live reversible debugging, attaching to a running process, and more.
No. UndoDB can be used from various frontends (e.g. Emacs, Eclipse, or DDD) or from the command line. This means UndoDB can be integrated into your developers’ existing workflow, making it quick and easy to get up and running.
UndoDB works on any Linux distro with kernel 2.6 or later, on ARM or x86. UndoDB also supports Android (Native) on ARM.
The number of lines of source is not directly relevant to UndoDB. Instead, it’s the size of the resulting program and its data that matters. We have customers that use UndoDB on programs that consume more than 100GB of memory.
No – UndoDB is for user-mode code only.
In record mode, the effects of all non-deterministic operations, including system calls, thread switches, signals, and shared-memory accesses, are stored in an event log. During replay, non-deterministic events are not performed directly; instead, their effects are synthesized from the contents of the event log.
Yes. However, when debugging using traditional tools, it’s often necessary to restart the program multiple times. UndoDB saves time because you can go backward and forward as often as you need without having to restart. For a further speedup it’s possible to defer the start of recording until later in the program’s execution.
No, the replaying process is self-contained and does not re-execute system calls. From the outside (except via the debugger), the process appears to have frozen at the moment the recording stopped.
No, you can’t go back in time to change history. The reason is that in replay mode your application is disconnected from the outside world. Consider a webserver sending a page over a socket. If you went back to mid-way through the page’s transmission, changed something, and attempted to begin a new recording from that point, the webserver would send the second half of the page twice, very likely confusing the other end of the socket.
Yes, although your debug experience will be the same as using regular debuggers on optimised code.
Yes, you can ship or deploy a Live Recorder-enabled application without debug information. To get full source-level debugging, load the recording and point the debugger at a version of the binary that contains the debug information.
Live Recorder typically causes programs to experience a 2-3x slowdown. This can be more or less, depending on the individual situation.
Storage requirements vary hugely depending on the program, but are typically a few megabytes per second of recording.
Yes. Live Recorder uses a circular event log, so old history is discarded. Due to the performance overhead, however, most of our customers do not run their programs with Live Recorder always on, but have a way to enable or disable recording as required.
Yes. As long as both the record and replay machines are running Linux and use the same CPU architecture, it’s fine. However, you can’t replay ARM recordings on x86. Live Recorder is not a simulator, so your replay machine’s CPU must support all of the instructions used on the recording machine. For example, if your program used AVX instructions when the recording was made, you must replay on a machine whose CPU has AVX support.
Live Recorder is enabled simply by calling the undodb_recording_start() function.
Yes. Just by calling the undodb_recording_stop() function.
The recording is saved to a file on the local filesystem. It is generated when you call the undodb_save() function or, if so configured, when the program terminates.
The recording contains the program’s starting state, all non-deterministic inputs, any debug information files, and all libraries used by the program. It is completely self-contained.
No, UndoDB is licensed under a proprietary license.
UndoDB invokes either a separately installed version of GDB or an “aggregated” version of GDB and interacts with it via GDB’s documented interfaces (namely GDB’s Python API and Remote Serial Protocol). UndoDB does not modify, link to, or exchange complex internal data structures with GDB, and does not form a combined work with GDB.
For reasons of convenience and stability the UndoDB release distribution includes a version of GDB in object code form in an “aggregate”. This version of GDB is conveyed in accordance with section 6 of version 3 of the GNU General Public License.
Undo has received confirmation from the Free Software Foundation that it is compliant with the terms of the GPL.
Reverse functionality uses 1.21 Gigawatts. For more information see: Flux capacitor or contact Dr Emmett Brown at 1640 Riverside Drive, Hill Valley, CA, USA.
We exploit the natural determinism of computers. Computers are completely deterministic; except when they’re not. Those classes of non-determinism (for example: I/O, scheduling, signals, and shared memory) are captured. When you step back in time, what’s happening under the hood is that we go back to a snapshot and play it forward to exactly where it needs to be. We capture the minimum we need to take you back to any point in time, but you get full visibility: all your globals, all your locals; everything rolls back to what it was at that time.
Distributed systems are inherently very non-deterministic. With Undo today, you have to record each of the nodes in your distributed system and manage the replay of that. So it’s a manual process, but we are working towards a point where you can follow the data through the distributed system and be able to click ‘next’, going from one node to another.
It works beautifully. Generated code is just data the program writes; there’s nothing special about it in that sense. We have lots of customers with lots of generated code, because those are really nasty cases to debug. We don’t give you magic: it’s still like using GDB, stepping through and looking at the hex and the disassembly of the code that’s been generated. The difference, of course, is that with time travel debugging it’s much more tractable to figure out, especially when the generated code causes memory corruption.