Wednesday, February 17, 2010

Hugepages save the kittens

Imagine we need to read and eval code from some untrusted source. As part of its expected behaviour this code may change the Lisp environment in ways we cannot predict, but it may also contain an error which stops execution in the middle. That is not what we want: the environment should either be changed completely or rolled back to its previous state.

A naive approach to the problem is to fork() the Lisp image, execute the untrusted code in the child (which is a full clone of the parent), and have the child exit with zero on success and a non-zero status otherwise. This way the parent can check whether it is safe to execute the code in its own environment. It takes only a few lines of code:

(defun rep (expr)
  (eval (read-from-string expr)))

(defun safe-eval (expr)
  (let ((pid (sb-posix:fork)))
    (cond
      ;; Parent: wait for the child.  The raw wait status is zero only when
      ;; the child exited cleanly with code 0; only then eval EXPR for real.
      ((plusp pid)
       (multiple-value-bind (pid status)
           (sb-posix:waitpid pid 0)
         (declare (ignore pid))
         (when (zerop status)
           (rep expr))))
      ;; Child: try the expression and report the outcome via the exit status.
      ((zerop pid)
       (handler-case
           (rep expr)
         (error ()
           (sb-ext:quit :unix-status 1)))
       (sb-ext:quit :unix-status 0))
      (t
       (error "fork failed~%")))))

It doesn't even matter if the child's image crashes completely due to uber-invalid code or compiler bugs: the parent will survive and will know that the code is bad. The next question is: how expensive is the fork() operation?

The Linux kernel (I'm a Linux-only guy) is smart enough not to copy all the process data one-to-one: it uses the copy-on-write (COW) technique, so a memory page of the new process gets separated from the corresponding page of the old process only at the moment one of them modifies it. So, basically, the kernel duplicates the task structures for the new process, and that's all.
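
A tiny illustration of that separation (the *shared* variable is invented just for this example): after fork() the child sees the same data, but its writes land on its own private copy of the page and never reach the parent.

(defparameter *shared* (make-array 1 :initial-element 0))

(let ((pid (sb-posix:fork)))
  (if (zerop pid)
      (progn
        (setf (aref *shared* 0) 42)   ; child writes: COW gives it a private copy
        (sb-ext:quit :unix-status 0))
      (progn
        (sb-posix:waitpid pid 0)
        (aref *shared* 0))))          ; still 0 in the parent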

Unfortunately, modern Lisps (like SBCL) create a huge image of tens of megabytes, and it may grow to gigabytes. Even with COW, fork() still has to duplicate the in-kernel bookkeeping: the VMA chains, which describe what memory the process has and how it is laid out in the process's address space, and the page tables, whose size is proportional to the number of mapped pages. The cost of the fork operation grows accordingly (not to mention the overall system slowdown). One accessible and acceptable solution is to use huge pages in programs which manipulate large portions of memory.
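
On Linux you can peek at both numbers for a running image, for example like this (assuming a kernel that exposes the VmSize and VmPTE fields in /proc):

;; Print the virtual size of this process and how much memory the kernel
;; spends on its page tables (roughly what fork() has to duplicate).
(with-open-file (status "/proc/self/status")
  (loop for line = (read-line status nil)
        while line
        when (or (search "VmSize" line) (search "VmPTE" line))
          do (write-line line)))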

The normal page size is 4 kilobytes on most systems. However, modern hardware allows part of memory to be mapped with a larger page size. For example, x86-64 supports huge pages of 2 megabytes.
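
Whether your system agrees, and with what huge page size, can be checked in /proc/meminfo (Linux-specific). Huge pages also have to be reserved in advance, e.g. by writing to /proc/sys/vm/nr_hugepages.

;; Show the configured huge page size and how many huge pages are reserved.
(with-open-file (meminfo "/proc/meminfo")
  (loop for line = (read-line meminfo nil)
        while line
        when (search "Huge" line)
          do (write-line line)))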

I wrote a small test which allocates a portion of memory with either normal or huge pages and then calls fork(). A sketch of such a test and the results are below:
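
The test program itself is not included in this post; here is a minimal sketch of the same measurement done directly from Lisp on top of sb-posix. Two assumptions: MAP_HUGETLB is not exported by sb-posix, so its Linux value is hard-coded, and hugepage mappings require enough pages reserved in /proc/sys/vm/nr_hugepages, otherwise mmap signals an error.

;; Linux-specific value of MAP_HUGETLB (not exported by sb-posix).
(defconstant +map-hugetlb+ #x40000)

(defun time-fork-with-mapping (bytes &key huge)
  "Map BYTES of anonymous memory (2 MB huge pages when HUGE, so BYTES should
then be a multiple of 2 MB), touch every page so the page tables are really
populated, and return how many seconds the fork() call itself took."
  (let* ((flags (logior sb-posix:map-private sb-posix:map-anon
                        (if huge +map-hugetlb+ 0)))
         (sap (sb-posix:mmap nil bytes
                             (logior sb-posix:prot-read sb-posix:prot-write)
                             flags -1 0))
         (page-size (if huge (* 2 1024 1024) 4096)))
    ;; Touch one byte per page so every page really gets a page table entry.
    (loop for offset from 0 below bytes by page-size
          do (setf (sb-sys:sap-ref-8 sap offset) 1))
    (let* ((start (get-internal-real-time))
           (pid (sb-posix:fork))
           (elapsed (- (get-internal-real-time) start)))
      (when (zerop pid)
        ;; Child: exit immediately, we only care about the fork itself.
        (sb-ext:quit :unix-status 0 :recklessly-p t))
      (sb-posix:waitpid pid 0)
      (sb-posix:munmap sap bytes)
      (/ elapsed internal-time-units-per-second))))

Comparing, say, (time-fork-with-mapping (* 1024 1024 1024)) against (time-fork-with-mapping (* 1024 1024 1024) :huge t) is enough to see the difference.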

[Plot: fork() time with normal 4 KB pages vs. 2 MB huge pages]

Gosh!... Forking a process with normal-sized pages is soooo slow! And huge pages definitely rock!

I wish SBCL's built-in memory manager worked with hugepages...