gccemacs

Table of Contents

Update 8

<2020-05-16 Sat>

Almost two months have passed since I wrote the last update, but really a lot has happened.

Apparently the ELS paper and its presentation gave the project some exposure, and with that I got quite some feedback! As a consequence, a lot of small fixes and improvements happened in addition to the following points:

32 bit support

I ported and tested the branch on X86_32 for both the wide-int and non wide-int configurations. This implies that the back-end can now generate code with the tag bits in MSB or LSB position according to the configuration. Still, I managed to boot only with NATIVE_FAST_BOOT or at comp-speed 0, due to the 2GB addressable memory limit that we still violate (see Optimizing the build).

Lazy documentation load

A mechanism for lazily loading function documentation from the .eln files is now in place. This allows reducing the number of unnecessary objects that the garbage collector has to traverse.

Optimizing the build

Reworking the code generation for 32-bit, I've measured a reduction in compilation time of ~15%.

Following that, I received an excellent report from Kévin Le Gouguec analyzing the memory consumption and compile time of the compilation process.

As a follow-up, the leim/ sub-folder is now not compiled by default. As these files are data and not code, this should not affect performance.

The overall compile time is thus reduced by a factor of two with respect to the original. I can now build a full Emacs at speed 2 on my compile-box in ~47min, to be compared with the original ~100min on the same machine for the branch at the state of Update 7.

The maximum memory used for the build (single thread, speed 2) now peaks around 3GB while compiling Org, and as a consequence is still beyond the addressable limit for 32-bit systems. This will be a subject of optimization in the future.

Operating system support

FreeBSD

I had a run on FreeBSD on X86_64 and it worked out.

macOS

There has been quite some discussion on the web about how to build for macOS. This is reported to work fine; here are some build instructions (bootstrap at speed 3 is not recommended), and here is the original discussion.

Windows

A series of patches has been posted for Windows by Nicolas Bértolo, so work is ongoing and Windows support is on its way!

New customizes

It is now possible to selectively exclude files from deferred native compilation using comp-deferred-compilation-black-list.

Similarly, for bootstrap, comp-bootstrap-black-list has been added.
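Assuming both variables take lists of regexps matched against file names (the exact matching semantics are not documented here, so treat this as a sketch with hypothetical patterns):

```elisp
;; Hypothetical example: never natively compile dir-locals files via
;; deferred compilation, and skip leim/ during bootstrap.
(setq comp-deferred-compilation-black-list '("\\.dir-locals\\.el\\'"))
(setq comp-bootstrap-black-list '("leim/"))
```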

Lambdas compilation

This is really the big boy that kept me busy for almost the last three weeks.

Now the native compiler is finally compiling also anonymous lambdas.

Before pushing the branch I tested it as best I could; I hope there are no regressions. I also still have to complete the tuning of the compiler to take full advantage of this, but it is a major improvement.

I'd like to come back to this with a post discussing it in more depth, because it should have measurable performance implications.

Bringing GNU Emacs to Native Code - ELS2020 presentation

<2020-04-29 Wed>

This is the recording of the presentation I gave at the European Lisp Symposium 2020 about the proceeding titled "Bringing GNU Emacs to Native Code".

You can find it for download here in OGV file video format or here for streaming on toobnix.org (a PeerTube instance).

The slides are here for download.

Bringing GNU Emacs to Native Code - proceeding

<2020-04-05 Sun>

I am very pleased to announce that the proceeding Luca Nassi, Nicola Manca and I worked on about this project has been accepted for the European Lisp Symposium 2020.

PDF here: https://doi.org/10.5281/zenodo.3736363 (direct link).

Update 7

<2020-03-23 Mon>

Native compile async update

The number of parallel compilation threads used by native-compile-async is now controlled by the comp-async-jobs-number customize.

By default this uses half of the available CPUs (or at least one), or it can be tuned manually.

It is possible to change this dynamically while compiling and have the system converge to the desired value.
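A minimal usage sketch (the value 4 is just an example):

```elisp
;; Pin the number of parallel native-compilation jobs to 4.
;; Per the above, this can be changed even while compilations are running.
(setq comp-async-jobs-number 4)
```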

Deferred compilation

When comp-deferred-compilation is non-nil and a lexically bound .elc defining functions is loaded, an asynchronous native compilation is queued for it. When this finishes, the function definitions in the compilation unit are automagically swapped for the native ones.

This works similarly to an asynchronous JIT compiler but still retains the advantage of being able to save to disk: simply reloading the compiled output at the next startup saves unnecessary recompilations.

This mechanism is general and works very well for transparently handling package compilation.

Fast bootstrap

Starting from the observation that natively compiling all of Emacs can be overkill and takes a long time, I've added to the build system the possibility of natively compiling exclusively the compilation units used to produce the dumped Emacs image. All other Lisp files in this case are byte-compiled conventionally.

It can be used as follows:

make NATIVE_FAST_BOOT=1

The compile time for a bootstrap at speed 2 in this case goes down by an order of magnitude, from ~1000m to ~100m of CPU time (to be divided by the machine parallelism used).

The produced Emacs, if used with comp-deferred-compilation enabled, will then natively compile only the files actually used, while the system is immediately usable. Essentially, Emacs will asynchronously compile and load itself while already running.

In case you want to try this, remember that Emacs will compile the file being loaded only when comp-deferred-compilation is non-nil, so make sure this is set very early in your .emacs.
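Concretely, something like this at the very top of the init file should do:

```elisp
;; Must come before any package or library is loaded, so that their
;; .elc files are queued for deferred native compilation on load.
(setq comp-deferred-compilation t)
```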

CI integration

On the Emacs CI, for every push we now test bootstrapping the stock configuration and the native compiled one at speeds 0, 1 and 2 (1 appears to be broken). Thanks Michael for suggesting this and explaining how to configure it.

Others

Received the first patch!! Thanks Adam.

Somebody apparently had it running on Windows 10 by hacking libgccjit! (Chinese):

http://www.albertzhou.net/blog/2020/01/emacs-native-comp.html

OSX apparently works, but only at speed 0 for now:

https://gist.github.com/mikroskeem/0a5c909c1880408adf732ceba6d3f9ab

I believe next comes more debugging and starting to think about make install.

Update 6

<2020-03-04 Wed>

Better memory usage

A better allocation strategy for relocated objects is now in place. This should benefit memory usage and (as a consequence) GC time.

Multi-arch multi-conf support

Different Emacs configurations and/or architectures can now build on the same Lisp code base.

eln files are compiled into a specific sub-folder, taking into account the host architecture and the current Emacs configuration, to disambiguate possible incompatibilities.

As an example, a file .../foo.el will compile into something like .../x86_64-pc-linux-gnu-12383a81d4f33a87/foo.eln.

I've updated the load and 'go to definition' mechanisms to support this, so it should be totally transparent to the user.

The same idea applies to all compilations, meaning both the main build process and packages when they get built.

Optimize qualities

Optimize qualities (read: compilation flags) are now stored in every eln file. It is now possible to inspect them like this:

(subr-native-comp-unit (symbol-function 'org-mode))
#<native compilation unit: .../emacs/lisp/org/x86_64-pc-linux-gnu-12383a81d4f33a87/org.eln ((comp-speed . 2) (comp-debug . 0))>

Docker image

I've made a docker image for people interested in trying this branch out without having to compile and/or install Emacs and libgccjit.

https://hub.docker.com/r/andreacorallo/emacs-nativecomp

I'll try to keep it updated. Also I plan to setup a small CI to test that the latest GCC does not break with our build.

Update 5

<2020-02-16 Sun>

New function frame layout

The variable layout in the generated code has been redesigned. Now every call by reference gets its own unique array for its arguments. This should generate slightly better code, limiting spilling.

This does not impact compilation time, but the produced binaries (as expected) are slightly smaller.

speed 2 stable! New default

After some interesting bug fixing, a colleague and I have now been using a native compiled Emacs bootstrapped at speed 2 in everyday production for two weeks. So far we have seen no crashes.

I consider having speed 2 stable a very major achievement.

As a consequence, with today's push I'm setting speed 2 as the default. This brings compilation time up to ~7h of CPU time on my laptop (to be divided by the number of CPUs) for a bootstrap starting from a clean repository.

I'll look in the future at how to mitigate that, but it is important to test with optimizations engaged.

Self TCO

A pass to perform self Tail Call Optimization has also been added. This is for now active only at speed 3.

Traditionally Emacs Lisp does not perform TCO. This pass should give more support to the programmer when a functional programming style is adopted.

Update 4

<2020-01-01 Wed>

As of today gccemacs no longer exists as independent project and its codebase is developed on the official Emacs repository git://git.sv.gnu.org/emacs.git under the feature/native-comp branch.

To bootstrap it:

./autogen.sh
./configure --with-nativecomp
make -jX

(substitute X with the number of your CPUs).

All of this is just exciting!

Update 3

<2019-12-31 Tue>

Here are some notes on the implementation of the compilation process and load mechanism for a simple compilation unit test.el with the following content:

;;;  -*- lexical-binding: t -*-

(defun foo (x)
  (let ((i 0))
    (while (< i x)
      (bar i)
      (message "i is: %d" i)
      (setq i (1+ i)))))

Native compilation process internals (an example)

Compilation is invoked with:

(native-compile-async "/path/to/test.el")

In this case the following compilation parameters have been used:

  • comp-speed 2
  • comp-debug 1
  • comp-verbose 3
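These settings can be reproduced before invoking the compilation (the path is the same placeholder used above):

```elisp
;; Settings used for this walk-through; comp-verbose 3 makes the
;; passes dump their output into *Native-compile-Log*.
(setq comp-speed 2
      comp-debug 1
      comp-verbose 3)
(native-compile-async "/path/to/test.el")
```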

Note: all the dumped data-structures presented below have been collected from the *Native-compile-Log* buffer that was created because of the comp-verbose value.

The compilation goes through the following passes:

comp-spill-lap (comp.el:452)

comp-spill-lap first runs the conventional byte-compiler. This has been modified to accumulate all top-level forms into byte-to-native-top-level-forms.

The function foo will be held in an instance of the comp-func structure. This has various slots, but the one we are most interested in is the LAP one. Here is its content:

(byte-constant 0 . 0)
(TAG 1 . 2)
(byte-dup)
(byte-stack-ref . 2)
(byte-lss . 0)
(byte-goto-if-nil-else-pop TAG 23 . 3)
(byte-constant bar . 1)
(byte-stack-ref . 1)
(byte-call . 1)
(byte-discard)
(byte-constant message . 2)
(byte-constant "i is: %d" . 3)
(byte-stack-ref . 2)
(byte-call . 2)
(byte-discard)
(byte-dup)
(byte-add1 . 0)
(byte-stack-set . 1)
(byte-goto TAG 1 . 2)
(TAG 23 . 3)
(byte-return . 0)

This is effectively the main input for the function compilation process.

comp-limplify (comp.el:1208)

The next pass is comp-limplify; its main responsibility is to generate the LIMPLE representation. After this transformation the function will be a collection of basic blocks, each of them composed of LIMPLE insns.

The format of every LIMPLE insn is a list (operator operands), where operands can be instances of comp-mvar (comp.el:276) or direct Lisp objects, with a meaning that depends on the operator itself.

  • Main LIMPLE operators

    Here's a quick presentation of some LIMPLE operators.

    • (set dst src)

      Copy the content of the slot represented by the m-var src into the slot represented by the m-var dst.

    • (setimm dst imm)

      Similar to the previous one but imm is a Lisp object known at compile time.

    • (jump bb)

      Unconditional jump to basic block bb

    • (cond-jump a b bb_1 bb_2)

      Conditional jump to bb_1 or bb_2 after having compared a and b.

    • (call f a b ...)

      Call the subr f with the m-vars a, b, etc. as parameters.

    • (callref f a b ...)

      Same as before but for subrs with the C signature (ptrdiff_t nargs, Lisp_Object *args).

    • (comment str)

      Include the annotation str as a comment in the .eln debug symbols (comp-debug > 0).

    • (return a)

      Return using as return value the content of the slot represented by a.

    • (phi dst src1 src2 ... srcn)

      Propagates the properties of m-vars src1…srcn into m-var dst.

    Every comp-func structure keeps its collection of basic blocks in the blocks slot.

    All functions are kept in the compilation context comp-ctxt within the funcs-h slot.

    This is the representation of LIMPLE for the compilation unit after comp-limplify has run:

    Function: foo
    
    <entry>
    (comment "Lisp function: foo")
    (set-par-to-local #s(comp-mvar 0 nil nil nil nil nil) 0)
    (jump bb_0)
    <bb_0>
    (comment "LAP op byte-constant")
    (setimm #s(comp-mvar 1 nil nil nil nil nil) 0 0)
    (jump bb_1)
    <bb_1>
    (comment "LAP TAG 1")
    (comment "LAP op byte-dup")
    (set #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar 1 nil nil nil nil nil))
    (comment "LAP op byte-stack-ref")
    (set #s(comp-mvar 3 nil nil nil nil nil) #s(comp-mvar 0 nil nil nil nil nil))
    (comment "LAP op byte-lss")
    (set #s(comp-mvar 2 nil nil nil nil nil) (callref < #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar 3 nil nil nil nil nil)))
    (comment "LAP op byte-goto-if-nil-else-pop")
    (cond-jump #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar nil nil t nil nil nil) bb_2 bb_3)
    <bb_3>
    (comment "LAP TAG 23")
    (comment "LAP op byte-return")
    (return #s(comp-mvar 2 nil nil nil nil nil))
    <bb_2>
    (comment "LAP op byte-constant")
    (setimm #s(comp-mvar 2 nil nil nil nil nil) 2 bar)
    (comment "LAP op byte-stack-ref")
    (set #s(comp-mvar 3 nil nil nil nil nil) #s(comp-mvar 1 nil nil nil nil nil))
    (comment "LAP op byte-call")
    (set #s(comp-mvar 2 nil nil nil nil nil) (callref funcall #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar 3 nil nil nil nil nil)))
    (comment "LAP op byte-discard")
    (comment "LAP op byte-constant")
    (setimm #s(comp-mvar 2 nil nil nil nil nil) 3 message)
    (comment "LAP op byte-constant")
    (setimm #s(comp-mvar 3 nil nil nil nil nil) 4 "i is: %d")
    (comment "LAP op byte-stack-ref")
    (set #s(comp-mvar 4 nil nil nil nil nil) #s(comp-mvar 1 nil nil nil nil nil))
    (comment "LAP op byte-call")
    (set #s(comp-mvar 2 nil nil nil nil nil) (callref funcall #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar 3 nil nil nil nil nil) #s(comp-mvar 4 nil nil nil nil nil)))
    (comment "LAP op byte-discard")
    (comment "LAP op byte-dup")
    (set #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar 1 nil nil nil nil nil))
    (comment "LAP op byte-add1")
    (set #s(comp-mvar 2 nil nil nil nil nil) (call 1+ #s(comp-mvar 2 nil nil nil nil nil)))
    (comment "LAP op byte-stack-set")
    (set #s(comp-mvar 1 nil nil nil nil nil) #s(comp-mvar 2 nil nil nil nil nil))
    (comment "LAP op byte-goto")
    (jump bb_1)
    
    Function: top-level-run
    
    <entry>
    (comment "Top level")
    (set-par-to-local #s(comp-mvar 0 nil nil nil nil nil) 0)
    (call comp--register-subr #s(comp-mvar nil nil t foo nil nil) #s(comp-mvar nil nil t 1 nil nil) #s(comp-mvar nil nil t 1 nil nil) #s(comp-mvar nil nil t "F666f6f_foo" nil nil) #s(comp-mvar nil nil t "
    
    (fn X)" nil nil) #s(comp-mvar nil nil t nil nil nil) #s(comp-mvar 0 nil nil nil nil nil))
    (return #s(comp-mvar nil nil t t nil nil))
    

    As is clear, a new function, top-level-run, has been created by this pass. This will come into play during the load process (more on that later).

Before comp-final

After comp-limplify the following passes are run: comp-ssa, comp-propagate, comp-call-optim, comp-propagate, comp-dead-code. These have already been quickly introduced and, generally speaking, belong to classic compiler theory.

Before entering comp-final, this is the content of LIMPLE:

Function: foo

<entry>
(comment "Lisp function: foo")
(set-par-to-local #s(comp-mvar 0 5 nil nil nil nil) 0)
(jump bb_0)
<bb_0>
(comment "LAP op byte-constant")
(setimm #s(comp-mvar 1 6 t 0 fixnum nil) 0 0)
(jump bb_1)
<bb_1>
(phi #s(comp-mvar 4 7 nil nil nil t) #s(comp-mvar 4 19 nil nil nil t) #s(comp-mvar 4 4 nil nil nil t))
(phi #s(comp-mvar 3 8 nil nil nil t) #s(comp-mvar 3 18 t "i is: %d" string t) #s(comp-mvar 3 3 nil nil nil t))
(phi #s(comp-mvar 2 9 nil nil nil nil) #s(comp-mvar 2 22 nil nil number nil) #s(comp-mvar 2 2 nil nil nil nil))
(phi #s(comp-mvar 1 10 nil nil nil nil) #s(comp-mvar 1 23 nil nil number nil) #s(comp-mvar 1 6 t 0 fixnum nil))
(comment "LAP TAG 1")
(comment "LAP op byte-dup")
(set #s(comp-mvar 2 11 nil nil nil t) #s(comp-mvar 1 10 nil nil nil nil))
(comment "LAP op byte-stack-ref")
(set #s(comp-mvar 3 12 nil nil nil t) #s(comp-mvar 0 5 nil nil nil nil))
(comment "LAP op byte-lss")
(set #s(comp-mvar 2 13 nil nil nil nil) (callref < #s(comp-mvar 2 11 nil nil nil t) #s(comp-mvar 3 12 nil nil nil t)))
(comment "LAP op byte-goto-if-nil-else-pop")
(cond-jump #s(comp-mvar 2 13 nil nil nil nil) #s(comp-mvar nil nil t nil nil nil) bb_2 bb_3)
<bb_3>
(comment "LAP TAG 23")
(comment "LAP op byte-return")
(return #s(comp-mvar 2 13 nil nil nil nil))
<bb_2>
(comment "LAP op byte-constant")
(setimm #s(comp-mvar 2 14 t bar symbol t) 2 bar)
(comment "LAP op byte-stack-ref")
(set #s(comp-mvar 3 15 nil nil nil t) #s(comp-mvar 1 10 nil nil nil nil))
(comment "LAP op byte-call")
(callref funcall #s(comp-mvar 2 14 t bar symbol t) #s(comp-mvar 3 15 nil nil nil t))
(comment "LAP op byte-discard")
(comment "LAP op byte-constant")
(comment "optimized out: (setimm #s(comp-mvar 2 17 t message symbol t) 3 message)")
(comment "LAP op byte-constant")
(setimm #s(comp-mvar 3 18 t "i is: %d" string t) 4 "i is: %d")
(comment "LAP op byte-stack-ref")
(set #s(comp-mvar 4 19 nil nil nil t) #s(comp-mvar 1 10 nil nil nil nil))
(comment "LAP op byte-call")
(callref message #s(comp-mvar 3 18 t "i is: %d" string t) #s(comp-mvar 4 19 nil nil nil t))
(comment "LAP op byte-discard")
(comment "LAP op byte-dup")
(set #s(comp-mvar 2 21 nil nil nil nil) #s(comp-mvar 1 10 nil nil nil nil))
(comment "LAP op byte-add1")
(set #s(comp-mvar 2 22 nil nil number nil) (call 1+ #s(comp-mvar 2 21 nil nil nil nil)))
(comment "LAP op byte-stack-set")
(set #s(comp-mvar 1 23 nil nil number nil) #s(comp-mvar 2 22 nil nil number nil))
(comment "LAP op byte-goto")
(jump bb_1)

Function: top-level-run

<entry>
(comment "Top level")
(set-par-to-local #s(comp-mvar 0 1 nil nil nil nil) 0)
(call comp--register-subr #s(comp-mvar nil nil t foo nil nil) #s(comp-mvar nil nil t 1 nil nil) #s(comp-mvar nil nil t 1 nil nil) #s(comp-mvar nil nil t "F666f6f_foo" nil nil) #s(comp-mvar nil nil t "

(fn X)" nil nil) #s(comp-mvar nil nil t nil nil nil) #s(comp-mvar 0 1 nil nil nil nil))
(return #s(comp-mvar nil nil t t nil nil))

A few notes:

(phi #s(comp-mvar 1 10 nil nil nil nil) #s(comp-mvar 1 23 nil nil number nil) #s(comp-mvar 1 6 t 0 fixnum nil))

This phi fails to propagate the type number into (comp-mvar 1 10) because at the moment there is no handling of the type hierarchy (see comp-propagate-insn). It does not change much, because for now gccemacs can take advantage only of fixnums in the code generation.

Note also that (comp-mvar 1 23) is a number, because it is computed as the result of 1+ and can fall outside of fixnum range. A type hint could help here…

Also: the original sequence calling the subr message through the funcall trampoline was:

(setimm #s(comp-mvar 2 nil nil nil nil nil) 3 message)
(comment "LAP op byte-constant")
(setimm #s(comp-mvar 3 nil nil nil nil nil) 4 "i is: %d")
(comment "LAP op byte-stack-ref")
(set #s(comp-mvar 4 nil nil nil nil nil) #s(comp-mvar 1 nil nil nil nil nil))
(comment "LAP op byte-call")
(set #s(comp-mvar 2 nil nil nil nil nil) (callref funcall #s(comp-mvar 2 nil nil nil nil nil) #s(comp-mvar 3 nil nil nil nil nil) #s(comp-mvar 4 nil nil nil nil nil)))

This has been transformed into:

(comment "optimized out: (setimm #s(comp-mvar 2 17 t message symbol t) 3 message)")
(comment "LAP op byte-constant")
(setimm #s(comp-mvar 3 18 t "i is: %d" string t) 4 "i is: %d")
(comment "LAP op byte-stack-ref")
(set #s(comp-mvar 4 19 nil nil nil t) #s(comp-mvar 1 10 nil nil nil nil))
(comment "LAP op byte-call")
(callref message #s(comp-mvar 3 18 t "i is: %d" string t) #s(comp-mvar 4 19 nil nil nil t))

This will translate into a direct call to Fmessage. Finally, the unnecessary assignment to slot number 2 has been optimized out (the return value of message is unused).

comp-final (comp.el:1811)

Final is the pass that drives libgccjit, "replaying" the LIMPLE content.

The C entry point is Fcomp__compile_ctxt_to_file (comp.c:3102). Here are its main duties, in order:

  1. Emit symbols and data structures needed to have the code re-loadable (emit_ctxt_code).
  2. Define all the functions that are exposed to GCC (see for example define_CAR_CDR).
  3. Declare all the functions to be compiled (declare_function).
  4. Define them (compile_function) into libgccjit IR.
  5. Run the whole GCC pipeline by calling gcc_jit_context_compile_to_file.
  • compile_function

    This is the main entry point for converting LIMPLE into libgccjit IR. It basically iterates over the insn list calling emit_limple_insn.

    emit_limple_insn then destructures every insn and emits the corresponding statements.

    libgccjit can dump a pseudo-C representation of its IR; gccemacs asks for that when compiling with comp-debug > 0.

    Here is the dump for our function foo:

    extern union comp_Lisp_Object
    F666f6f_foo (union comp_Lisp_Object par_0)
    {
      union comp_Lisp_Object[5] local;
      union comp_Lisp_Object local0;
      union comp_Lisp_Object local1;
      union comp_Lisp_Object local2;
      union comp_Lisp_Object local3;
      union comp_Lisp_Object local4;
      struct comp_handler * c;
      union cast_union union_cast_28;
    
    entry:
      /* Lisp function: foo */
      local0 = par_0;
      goto bb_0;
    
    bb_0:
      /* LAP op byte-constant */
      /* 0 */
      local1 = d_reloc[(int)0];
      goto bb_1;
    
    bb_1:
      /* LAP TAG 1 */
      /* LAP op byte-dup */
      local[(int)2] = local1;
      /* LAP op byte-stack-ref */
      local[(int)3] = local0;
      /* LAP op byte-lss */
      /* calling subr: < */
      local2 = freloc_link_table->R3c_ ((long long)2, (&local[(int)2]));
      /* LAP op byte-goto-if-nil-else-pop */
      /* const lisp obj: nil */
      union_cast_28.v_p = (void *)NULL;
      /* EQ */
      /* XLI */
      /* XLI */
      if (local2.num == union_cast_28.lisp_obj.num) goto bb_3; else goto bb_2;
    
    bb_3:
      /* LAP TAG 23 */
      /* LAP op byte-return */
      return local2;
    
    bb_2:
      /* LAP op byte-constant */
      /* bar */
      local[(int)2] = d_reloc[(int)2];
      /* LAP op byte-stack-ref */
      local[(int)3] = local1;
      /* LAP op byte-call */
      /* calling subr: funcall */
      (void)freloc_link_table->R66756e63616c6c_funcall ((long long)2, (&local[(int)2]));
      /* LAP op byte-discard */
      /* LAP op byte-constant */
      /* optimized out: (setimm #s(comp-mvar 2 17 t message symbol t) 3 message) */
      /* LAP op byte-constant */
      /* "i is: %d" */
      local[(int)3] = d_reloc[(int)4];
      /* LAP op byte-stack-ref */
      local[(int)4] = local1;
      /* LAP op byte-call */
      /* calling subr: message */
      (void)freloc_link_table->R6d657373616765_message ((long long)2, (&local[(int)3]));
      /* LAP op byte-discard */
      /* LAP op byte-dup */
      local2 = local1;
      /* LAP op byte-add1 */
      local2 = add1 (local2, (bool)0);
      /* LAP op byte-stack-set */
      local1 = local2;
      /* LAP op byte-goto */
      goto bb_1;
    }
    
    

    Lisp objects are located in an array of Lisp_Objects named d_reloc.

    Function pointers used to call into Emacs are stored in the freloc_link_table structure. Here is how Fmessage is called:

    (void)freloc_link_table->R6d657373616765_message ((long long)2, (&local[(int)3]));
    

    It is interesting to note what happens to the original 1+ call; it is treated differently from the call to message:

    local2 = add1 (local2, (bool)0);
    

    The original call to Fadd1 has been substituted with a call to add1, a static function defined through libgccjit by define_add1_sub1.

    The substitution happens because the inline emitter for 1+ was registered at comp.c:2954 during the creation of the first compiler context.

    Here follows the definition of add1 (also dumped into the pseudo-C).

    static union comp_Lisp_Object
    add1 (union comp_Lisp_Object n, bool cert_fixnum)
    {
      union cast_union union_cast_22;
      union cast_union union_cast_23;
      union comp_Lisp_Object lisp_obj_fixnum;
    
    entry_block:
      /* XFIXNUM */
      /* XLI */
      /* FIXNUMP */
      /* XLI */
      union_cast_22.ll = n.num >> (long long)0;
      union_cast_23.i = !(union_cast_22.u - (unsigned int)2 & (unsigned int)3);
      if ((cert_fixnum || union_cast_23.b) && n.num >> (long long)2 != (long long)2305843009213693951) goto inline_block; else goto fcall_block;
    
    inline_block:
      /* make_fixnum */
      /* lval_XLI */
      lisp_obj_fixnum.num = ((n.num >> (long long)2) + (long long)1 << (long long)2) + (long long)2;
      return lisp_obj_fixnum;
    
    fcall_block:
      /* calling subr: 1+ */
      return freloc_link_table->R312b_1 (n);
    }
    
    

    If propagation had been able to prove that n was a fixnum, add1 would have been called with cert_fixnum set to true.

    This is used to signal to GCC's own propagation how to optimize the conditions.

Load mechanism

To load the compilation unit:

(load "test.eln")

Fload has been modified and will call Fnative_elisp_load if a .eln file is found. This in turn calls load_comp_unit.

load_comp_unit uses dynlib_sym to look up the following symbols in our .eln file and performs the following actions with them:

  • Call freloc_hash (a function returning a string signing freloc_link_table) and check its content, failing in case it does not match.
  • Set current_thread_reloc (static pointer holding the address of current_thread).
  • Set pure_reloc (static pointer holding the address of pure).
  • Set freloc_link_table (static pointer to a structure of function pointers in use to call from native compiled functions into Emacs primitives).
  • Call text_data_reloc (a function returning a string) and read the vector of Lisp objects used by the native compiled functions. In this case:
[0 nil bar message "i is: %d" foo 1 "F666f6f_foo" "

(fn X)" t consp listp]
  • Set d_reloc (a static array of Lisp objects in use by the compiled functions).

After this, top_level_run is called. Here's its pseudo source:

extern union comp_Lisp_Object
top_level_run (union comp_Lisp_Object par_0)
{
  union comp_Lisp_Object[1] local;
  union comp_Lisp_Object local0;
  struct comp_handler * c;
  union cast_union union_cast_29;
  union cast_union union_cast_30;
  union cast_union union_cast_31;

entry:
  /* Top level */
  local0 = par_0;
  /* const lisp obj: foo */
  /* 1 */
  union_cast_29.v_p = (void *)0x6;
  /* 1 */
  union_cast_30.v_p = (void *)0x6;
  /* const lisp obj: "F666f6f_foo" */
  /* const lisp obj: "

(fn X)" */
  /* const lisp obj: nil */
  union_cast_31.v_p = (void *)NULL;
  /* calling subr: comp--register-subr */
  (void)freloc_link_table->R636f6d702d2d72656769737465722d73756272_comp__register_subr (d_reloc[(long long)5], union_cast_29.lisp_obj, union_cast_30.lisp_obj, d_reloc[(long long)7], d_reloc[(long long)8], union_cast_31.lisp_obj, local0);
  /* const lisp obj: t */
  return d_reloc[(long long)9];
}

top_level_run performs all the modifications to the environment expected from the load of the .eln file.

In this case it is just calling back into Emacs' Fcomp__register_subr (comp.c:3294), asking to register a subr named foo (no other top-level forms were present).

From then on, foo is defined as a subr, and is called and treated as such.

Primitive subrs and subrs from native compiled code are distinguishable using the predicate subr-native-elisp-p (data.c:870).
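For example, assuming foo from test.eln has been loaded as above (a sketch of the expected answers, not verified output):

```elisp
;; Native-compiled Lisp function: expected t.
(subr-native-elisp-p (symbol-function 'foo))
;; C primitive subr: expected nil.
(subr-native-elisp-p (symbol-function 'car))
```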

The full output of the pseudo C code is downloadable here: test.c

Update 2

<2019-12-26 Thu>

This update comes with some pretty major technical changes and features.

New function link table ABI

Going in the direction of having to load many eln files, I thought it was good to have just one single function link table. This structure is the array of function pointers used by the native compiled code to call Emacs C primitives and the functions supporting the run-time.

In the first proposed prototype this structure was defined and compiled differently for every eln file. This means that every eln had its own version of it, including just and only the needed pointers.

While this is an advantage in terms of memory usage and locality when loading few eln files, it does not scale up very well. It also pulled in a circular dependency of the compiler on advice.el; that is now removed and, generally speaking, the mechanism is considerably simplified.

To produce a single function link table I needed a list of all defined subrs; defsubr has been modified to cons up that list. The list is then used both by the compiler and by the load mechanism. It survives bootstrap, and restarts in general, being dumped as a conventional Lisp structure.

The last missing caveat is hashing, and verifying at load time, that the eln file being loaded is compliant with the running system. This is still to be done.

Garbage collector support

The expected behavior here is to have the eln file unloaded automatically when it is not in use anymore. This is achieved with the introduction of a new kind of object: the native compilation unit.

The mechanism is quite simple. Every native Lisp function has a reference to its original compilation unit. When this is not reachable anymore (because all functions defined in it got redefined), it is unloaded with a dlclose (or equivalent) while the object is garbage collected.

Image dump

This gave me quite some thinking. I guess some story-telling will explain the rationale that led to the current implementation. The two main options on the table were:

  1. Linker approach.

    Since we generate code with something that is essentially a C tool-chain, using the system linker would appear to be the first option.

    Still, it was unclear to me how to cope with side effects that are not merely function definitions. I guess some modification to the actual code-base would have been necessary.

    Also, making something future-proof for subsequent dumps of the Lisp image looked quite challenging.

  2. Including compiled functions into the image dump.

    Including single functions into the dump image had its difficulties too. Essentially I came to the conclusion that trying to break an already compiled and linked shared library into functions, then storing and reloading those functions, would translate into: having to parse the ELF, re-implementing part of the loader, making assumptions on the implementation, etc… In short, looking for trouble.

In my mind I formed the opinion that breaking the compilation unit into pieces was a mistake and I just had to cope with that. I then came up with the plan of storing each full eln file into the image dump.

This could work, but the issue is that loading a shared library that resides in memory and not in a file turned out not to be completely trivial.

dlopen takes a filename as a parameter, and I guess (I could be wrong) the secret reason for that is that the dynamic linker is just a program… and yes! In Unix, programs work on files…

I then found out about memfd_create as a possible work-around. This is a syscall present in recent Linux kernels that lets you create a file descriptor from a piece of your user-space memory without having to do any real file copy or real file-system access. The file will appear in some /proc sub-folder.

I wasn't very keen on this option because of it being non portable, but it looked like the last option and some alternative equivalents to memfd_create seemed to exist for other non Linux kernels.

I was already half way into the implementation when I stepped back and wanted to rethink the problem as a whole.

Why was the Emacs dump image mechanism created?

Startup time is the motivation for that.

The fact that the result is a single file (or two in case of the portable dumper) is, I think, just a (nice) side effect.

Having a single file to load is faster mainly because objects in it are serialized in a binary format and don't need to go through the reader (as it is the case for elc files).

But thinking about it, functions in eln files are already serialized, being compiled into machine code, and the action of calling dlsym is just a lookup asking for the entry point address in memory.

So what's the motivation for dumping if we already have eln files? Couldn't we just load them?

No. This would be sub-optimal because in the eln file format there is still a piece of data left that goes through the reader during load. These are the objects used by functions. We are talking about a vector that is the per-compilation-unit aggregation of the original constant vectors. And last but not least, it is absolutely nice to have a system where all the environment state is dump-able.

Given all these constraints and thoughts my final solution is:

  • store in the compilation unit Lisp object (already defined for the gc support) a reference to the eln file originally loaded.
  • at dump-time just dump the compilation unit object but do not include the eln file content in the dump file.
  • at compilation unit load time (during relocation, to use precise pdump nomenclature) revive it with a dlopen on the original eln.
  • after all compilation units have been relocated, relocate all native function pointers doing dlsym into the compilation unit.

The good side effect of this mechanism is that the data vector does not have to go through the reader because it is naturally dumped and restored by the pdumper infrastructure.

Am I cheating then? Maybe, but I think this solution is quite sound. Nevertheless, in case having a unique file is a hard requirement, falling back to a memfd_create-like solution would be just an incremental step on top of the implemented mechanism.

Here again a sanity check is missing to ensure the eln file being reloaded has not been changed. To be done.

Bootstrap

The standard bootstrap process is not radically changed. Essentially, wherever elc files were compiled, eln files are now compiled too. Note that the reason why elc compilation is still necessary is that the native compiler does not support dynamic scope.

When the native compiler fails because of this, Emacs will have the elc file at its disposal to load. A cleaner solution would be to distinguish between dynamically and lexically scoped files while defining compilation targets but… good for now.

I've taken a few time measurements of the full bootstrap process from a clean repository (including C compilation) on my dual core laptop.

Emacs baseline   gccemacs comp-speed 0   gccemacs comp-speed 2
~8min            ~25min                  ~6h

I've left speed 0 as default for now.

I tried to perform some measurement on startup time using:

time ~/emacs/build/src/emacs -batch

But on my system the difference is within the noise level and the startup time for both is ~0.1s.

I got similar results trying to measure the difference in time needed to (require 'org).
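Such in-editor timings can be taken with the built-in benchmark-run macro, e.g.:

```elisp
;; Returns a list (ELAPSED-TIME GC-RUNS GC-TIME); on my setup the
;; difference between byte compiled and native compiled was within noise.
(benchmark-run 1 (require 'org))
```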

Function disassembly

I've added initial support for native compiled code to the disassemble function. This is what it looks like disassembling bubble-no-cons from elisp-benchmarks on my machine.

Note that some pseudo C code and comments are intermixed with the assembly by objdump. This is because the function was compiled with comp-debug 1, therefore some debug symbols are emitted to ease compiler debugging.

Update 1

<2019-12-08 Sun>

I've implemented the support for docstrings and interactive functions.

As a consequence as of today all lexically scoped Elisp functions can be native compiled with no exceptions.

Ex: this is the output I get for C-h f org-mode:

org-mode is native compiled Lisp function in ‘org.el’.

(org-mode)

  Parent mode: ‘outline-mode’.
  Probably introduced at or before Emacs version 22.1.

Outline-based notes management and organizer, alias
"Carsten’s outline-mode for keeping track of everything."

Org mode develops organizational tasks around a NOTES file which
contains information about projects as plain text.  Org mode is
...

As expected clicking on org.el correctly drives me to the original location in the source code.

Being native compiled, the org-mode function cell satisfies subrp.

(subrp (symbol-function #'org-mode)) => t

I'll try to keep the master branch as the one with the most updated working state for the project.

Please fire a mail to akrl AT sdf DOT org in case you find it to be broken.

gccemacs

<2019-11-27 Wed>

gccemacs is a modified Emacs capable of compiling and running Emacs Lisp as native code in the form of re-loadable elf files. As the name suggests, this is achieved by blending together Emacs and the gcc infrastructure.

The purpose of this project is to experiment with and discuss the proposed compiler design on the emacs-devel mailing list (part1, part2).

I've no current plan to maintain this as a long-standing, out-of-tree Emacs fork.

Main tech goals for gccemacs are:

  • show that elisp run-time performance can be considerably improved by the proposed compiler approach.
  • test for a viable and non-disruptive way to have a more capable lisp implementation.

Having elisp self-hosted would enable not only a faster environment but also an Emacs that is easier to modify, requiring less C code to be written.

The actual code can be found here:

https://gitlab.com/koral/gccemacs See (Update 4)!

Not a jitter

gccemacs is not a jitter. The compilation output (.eln files) consists of elf files suitable for being reloaded between different sessions.

I believe this solution has advantages over the jitter approach, not having the duty to recompile the same code at every start-up.

This is a key advantage, especially for an application as sensitive to start-up time as Emacs is. Moreover, more time can be spent on advanced compile-time optimizations.

Some complications exist to make the system re-loadable, but they are worth paying for.

A further advantage of this solution is the potential for reducing load-time. Optimizing for load time should be further investigated.

libgccjit

Plugging into the gcc infrastructure is done through libgccjit.

Despite what the name suggests, libgccjit is usable not just for making jitters but for any other kind of compiler.

libgccjit can be seen as a way to drive gcc, in the form of a shared library, using function calls. In other words, this is probably the 'cheapest' way to write what is technically named a front-end in compiler jargon.

Usage

Which libgccjit

libgccjit was added to gcc 5 thanks to David Malcolm. gccemacs should not be using any recently added entry point, therefore in theory any libgccjit should be fine. BUT:

  • gcc 9 was affected by PR91928 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91928. I fixed it for the trunk and did the back-port into the 9 branch, so as of revision 276625 gcc 9 should be functional again. gcc 8 and earlier should not be affected but I had no time to do any serious testing with them.
  • gcc 7 shipped with Ubuntu 18.04 was reported not to pass gccemacs tests.

Consider this information if you want to try the libgccjit shipped by your distribution.

I suggest a recent gcc 9 if you are going to compile it yourself. Otherwise, please report any success or failure of using gccemacs with different versions of gcc.

libgccjit compilation instructions and docs are here: https://gcc.gnu.org/onlinedocs/jit/index.html

Compilation

Once libgccjit is installed a normal autogen.sh + configure + make should do the job for gccemacs.

Once compiled, the native compiler tests can be run with:

~/emacs/src/emacs -batch -l ert -l ~/emacs/test/src/comp-tests.el -f ert-run-tests-batch-and-exit

(adjust path accordingly).

Entry points

gccemacs adds two main entry points for compilation:

  • native-compile
  • native-compile-async

Some special variables influence compilation, most notably comp-speed. This can range from 0 to 3, Common Lisp style.

See the doc for details.
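For instance, a minimal sketch of compiling a single file synchronously at a given optimization level (the file name here is hypothetical):

```elisp
;; comp-speed ranges from 0 to 3, Common Lisp style; higher is more optimized.
(setq comp-speed 2)
;; Synchronously compile one lexically scoped file, producing file.eln.
(native-compile "file.el")
```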

Once a compilation unit is compiled the output is an .eln file. This is similar to an .elc and can be loaded by require or manually.

A lisp file can be compiled as follows:

(native-compile-async "file.el")

Once the compilation completes, the produced file.eln can be loaded conventionally.

Please note that file.el has to be lexically bound in order to be compiled.
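In practice this means the file should carry the standard lexical-binding cookie on its first line:

```elisp
;;; file.el --- example library  -*- lexical-binding: t; -*-
```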

A native compiled function foo should then show in describe-function something like:

foo is a built-in function in ‘C source code’.

A more common usage is to compile multiple files or entire folders. The following example will compile asynchronously all the installed packages (the number 4 is equivalent to -j4 for make).

(native-compile-async "~/.emacs.d/elpa/" 4 t)

Progress can be monitored in the *Async-native-compile-log* buffer.

A foreword about why optimizing outside gcc

While I'm writing, I recall a few (I think good) reasons why I decided to replicate some optimizing compiler algorithms outside gcc:

  • The gcc infrastructure has no knowledge that a call to a primitive (say Fcons) will return an object with certain tag bits set (in this case say is a cons). The only feasible way to inform it would be to use LTO but I've some doubts this would be practical for this application.
  • I decided to lay out the activation frame as an array holding the lisp objects local to the function, so as not to have to move and store these whenever a call by reference is emitted. To avoid having the compiler clobber all the local objects in the activation frame every time one of these function calls is emitted, I decided to lift the objects that are not involved in this kind of calls into the equivalent of conventional automatic local variables. To do this I need to propagate the ref property within the control flow graph.
  • The gcc infrastructure has no knowledge of which lisp functions are pure, therefore this is a barrier to their propagation.
  • Gcc does not provide help for boxing and unboxing values.
  • The propagation engine can be used for giving warnings and errors at compile time.

Internals

gccemacs compilation process can be divided into 3 main phases:

  • byte-compilation

    This is the conventional Emacs byte compilation phase. It is needed because the output of the byte compiler is used as input for the following phase.

  • middle-end

    Here the code goes through different transformations. This is implemented in lisp/emacs-lisp/comp.el.

  • back-end

    This defines inliner and helper functions and drives libgccjit for the final code generation. It is implemented in src/comp.c.

Passes

The gccemacs native compiler is divided into the following passes:

  • spill-lap:

    The input used for compiling is the internal representation created by the byte compiler (LAP); this is used to get the byte-code before it is assembled. This pass is responsible for running the byte compiler and extracting the LAP IR.

  • limplify:

    The main internal representation used by gccemacs is called LIMPLE (as a tribute to the gcc GIMPLE). This pass is responsible for converting LAP output into LIMPLE. At the base of LIMPLE is the struct comp-mvar representing a meta-variable.

       (cl-defstruct (comp-mvar (:constructor make--comp-mvar))
         "A meta-variable being a slot in the meta-stack."
         (slot nil :type fixnum
               :documentation "Slot number.
       -1 is a special value and indicates the scratch slot.")
         (id nil :type (or null number)
             :documentation "SSA number when in SSA form.")
         (const-vld nil :type boolean
                    :documentation "Valid signal for the following slot.")
         (constant nil
                   :documentation "When const-vld non nil this is used for holding
        a value known at compile time.")
         (type nil
               :documentation "When non nil indicates the type when known at compile
    time.")
         (ref nil :type boolean
              :documentation "When t the m-var is involved in a call where is passed by
        reference."))
    

    As an example, the following LIMPLE insn assigns to frame slot number 9 the symbol a (found as the third element in the relocation array).

    (setimm #s(comp-mvar 9 1 t 3 nil) 3 a)
    

    As another example, this is moving the 5th frame slot content into the 2nd frame slot.

    (set #s(comp-mvar 2 6 nil nil nil nil) #s(comp-mvar 5 4 nil nil nil nil))
    

    At this stage the function gets decomposed into basic blocks, each of these being a list of insns.

    In this phase the original push & pop within the byte-vm stack is translated into a sequence of assignments between the frame slots of the function. Every slot in the frame is represented by a comp-mvar. All these spurious moves will eventually be optimized out. It is important to highlight that, doing this, we are moving all the push and pop machinery from run-time to compile time.

  • ssa

    This is responsible for porting LIMPLE into SSA form. After this pass every m-var gets a unique id and is assigned only once.

    Phi functions are also placed at this stage.

  • propagate

    This pass iteratively propagates, within the control flow graph, the following properties for each m-var: value, type, and whether the m-var will be used in a function call by reference.

    Propagate also removes function calls to pure functions when all the arguments are, or become, known during propagation.

    NOTE: this is also done by the byte optimizer, but propagate has greater chances to succeed due to the CFG analysis itself.

  • call-optim

    This is responsible for (depending on the optimization level) trying to emit calls that do not go through the funcall trampoline.

    Note that after this all calls to subrs can have equal dignity, whether or not they originally had a dedicated byte op-code.

    Please note also that at speed 3 the compiler also optimizes calls within the same compilation unit. This saves the trampoline cost and allows gcc further in-lining and propagation, but makes these functions not safely re-definable unless the whole compilation unit is recompiled.

  • dead-code

    This tries to remove unnecessary assignments.

    Spurious assignments to non-floating frame slots cannot be optimized out by gcc when the frame is clobbered by function calls taking the frame by reference, so it is good to remove them.

  • final

    This drives LIMPLE into libgccjit IR and invokes the compilation. Final is also responsible for:

    • Defining the inline functions that give gcc visibility into the lisp implementation.
    • Suggesting to them the correct type, if available, while emitting the function call.
    • Lifting variables from the frame array if these are to be located in the floating frame.

Compiler hints

Having realized it was quite easy to feed the propagation engine with additional information, I've implemented two entry points.

These are:

  • comp-hint-fixnum
  • comp-hint-cons

As an example, this is how to promise that the following expression evaluates to a cons:

(comp-hint-cons x)

The propagation engine will use this information for all the m-vars influenced by the evaluation result of (comp-hint-cons x) in the control flow graph.

At comp-speed smaller than 3, the compiler hints translate into assertions verifying the declared property.

These low level primitives could be used to implement something like CL's the.
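As a sketch, a hypothetical accessor using such a hint could look like:

```elisp
;; Hypothetical example: promise the compiler that X is a cons, so the
;; propagation engine can flow this type information and, at comp-speed 3,
;; elide the type check otherwise emitted for car.
(defun my-fast-car (x)
  (car (comp-hint-cons x)))
```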

Type checks removal

This section applies to the (few) functions exposed to the gcc, namely: car, cdr, setcar, setcdr, 1+, 1-, - (Fnegate).

At comp-speed 3 type checks will not be emitted if the propagation has proved the type to be the expected one for the called function.

Debugging the compiler

The native compiler can be debugged in several ways, but I think the most instructive one is to set comp-debug to 1. In this way the compiler will deposit a .c file alongside the .eln.

This is not a truly compilable C file but something sufficiently close to understand what's going on. Also, debug symbols are emitted into the .eln, therefore it is possible to trap or step into it using gdb.

Note that comp-debug set to 1 has a negative impact on compile time.
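A minimal sketch of such a debug session (the file name here is hypothetical):

```elisp
;; Have the compiler deposit the pseudo C file alongside the .eln and
;; emit debug symbols, at the compile-time cost noted above.
(setq comp-debug 1)
(native-compile-async "file.el")
```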

Quick performance discussion

In order to have something to optimize for and measure, we have put together a small collection of elisp benchmarks (nano and not).

https://gitlab.com/koral/elisp-benchmarks

The choice of the benchmarks and their weight is certainly very arbitrary but still… better than nothing.

All benchmarks can be considered u-benchmarks except nbody and dhrystone. Worth noting that these last two have not been tuned specifically for the native compiler.

On the machine I'm running (i7-6600U) the latest performance figure I've got is the following:

name                    byte-code   native-bench   native-all   native-all vs.
                        (sec)       (sec)          (sec)        byte-code
bubble                  30.44       6.54           6.29         4.8x
bubble-no-cons          42.42       10.22          10.20        4.2x
fibn                    39.36       16.96          16.98        2.3x
fibn-rec                20.64       8.06           7.99         2.6x
fibn-tc                 19.43       6.33           6.89         2.8x
inclist-tc              20.43       2.16           2.18         9.4x
listlen-tc              17.66       0.66           0.70         25.2x
inclist-no-type-hints   45.57       5.60           5.81         7.8x
inclist-type-hints      45.89       3.67           3.70         12.4x
nbody                   112.99      23.65          24.44        4.6x
dhrystone               112.40      64.67          47.12        2.4x
tot                     507.23      148.52         132.3        3.8x

Column explanation:

  • byte-code benchmark running byte compiled (baseline).
  • native-bench benchmark running native compiled on a non native compiled emacs.
  • native-all benchmark running with the benchmark itself and all emacs lisp native compiled.

All benchmarks have been compiled with comp-speed 3, while the Emacs lisp for the native-all column was compiled with comp-speed 2.

Here follows a very preliminary and incomplete performance discussion:

The bubble, bubble-no-cons and fibn u-benchmarks show fair results with no specific tuning.

Recursive benchmarks (suffixed with -rec or -tc) show enormous uplifts, most likely dominated by the much faster calling convention. Note that no TCO is performed here.

inclist-no-type-hints and inclist-type-hints show a considerable performance uplift just due to type checks being optimized out thanks to the information coming from the type hints.

nbody is ~4.6x faster with no lisp specific optimization triggered. This benchmark calls only primitives (as do all the previous u-benchs), therefore having all of Emacs native compiled does not give any advantage. Probably intra compilation unit call optimization plays a role in the result.

On the other hand, dhrystone benefits quite a lot from the full Emacs compilation, having to funcall into store-substring and string>.

Hope this indicates a potential for considerable performance benefits.

Status and verification

A number of tests are defined in comp-tests.el. These include a number of micro tests (some of these inherited from Tom Tromey's jitter) plus a classical bootstrap test where the compiler compiles itself twice and the binaries are compared.

A couple of colleagues and I are running gccemacs in production, with all the lexically scoped Emacs lisp compiled plus all the elpa folder for each of our setups.

While I'm writing, this instance has 18682 native compiled elisp functions loaded from 669 .eln files.

So far this seems stable.

Current limitations

The following limitations are not design limitations but just reflect the current "work in progress" state:

  • Only lexically scoped code is compiled.
  • Interactive functions are included into eln files but as byte-compiled. (Update 1)
  • No doc string support for native compiled functions. (Update 1)
  • Limited lisp implementation exposure: just a few functions are defined and exposed to gcc: car, cdr, setcar, setcdr, 1+, 1-, - (Fnegate). These functions can be fully in-lined and optimized.
  • Type propagation:

    Very few functions have a return type declared to the compiler (see comp-known-ret-types). Moreover, type propagation can't rely on the CL equivalent of sub-type-p.

  • Garbage collection support has to be done: (Update 2)

    Once a .eln file is loaded it is never unloaded, even if the subrs defined there are redefined.

  • No integration with the build system: (Update 2)

    No native compilation is automatically triggered by the build process.

  • Native compiler is not re-entrant:

    Just top level functions are native compiled; the others (lambdas included) are still kept as byte-code.

Further compiler improvements

  • Better lisp exposure

    As mentioned in the previous paragraph, the lisp machinery exposed to the gcc infrastructure is at this point quite limited and should be improved.

  • Unboxing

    Introducing an unboxing mechanism was considered out of scope for this investigation, but it is certainly possible to extend the current infrastructure to have it. Unboxing would have a big impact on performance. Combined with compiler hints, it should be possible to generate efficient code, closing some more of the gap with C (in some cases probably all of it).

Long term

  • Further improving load time performance using the portable dumper for object de/serialization within elns (currently the print/read mechanism is in use).
  • Provide better compile time warnings and errors relying on the propagation engine?
  • The real final goal would be to be able to assemble the Emacs image with native compiled functions replacing all the byte-compiled ones out of the box. EDIT: See (Update 2)!

Final word

All of this is very experimental. Take it (at your risk) as what it is, a proof of concept.

Feedback is very welcome.

Author: Andrea Corallo

Created: 2020-05-16 Sat 15:54
