Falaise  4.0.1
SuperNEMO Software Toolkit
Writing FLReconstruct Modules

Introduction to the writing of FLReconstruct modules

If you have just started using Falaise or the FLReconstruct application, we strongly recommend that you familiarize yourself with the basic usage of FLReconstruct covered in The FLReconstruct Application.

FLReconstruct uses a pipeline pattern to process events. You can view this as a production line with each stage on the line performing some operation on the event. Each stage in the pipeline is called a "pipeline module" (or just "module") and is implemented as a C++ class. The FLReconstruct application can load new modules at runtime using a "plugin" mechanism. Scripting, as demonstrated in the tutorial on using FLReconstruct, is used to load new modules from plugins, select the modules to use in the pipeline, and configure each module.

In this tutorial we will see how to implement our own modules for use in the FLReconstruct pipeline. This will cover

  1. Writing a basic C++ module class
  2. Compiling the class into a plugin for use by FLReconstruct
  3. Scripting for use of the plugin by FLReconstruct
  4. Implementing runtime module configuration

Getting your module to actually do something with the events that are passed to it is deferred to a later tutorial.

Implementing a Minimal flreconstruct Module

Creating the Module Source Code

We begin by creating an empty directory to hold the source code for our example module, which we'll name "MyModule"

$ cd MyWorkSpace
$ mkdir MyModule
$ cd MyModule
$ ls
$

You are free to organise the source code under this directory as you see fit. In this very simple case we will just place all files in the MyModule directory without any subdirectories. We start by creating the implementation file, for the C++ class, which we'll name MyModule.cpp

#include <iostream>

// Interface from Falaise
#include "falaise/snemo/processing/module.h"

class MyModule {
 public:
  // Default constructor
  MyModule() = default;

  // User-defined Constructor
  MyModule(falaise::config::property_set const& /*ps*/,
           datatools::service_manager& /*services*/) {}

  // Process event
  falaise::processing::status process(datatools::things& /*event*/) {
    std::cout << "MyModule::process called!\n";
    return falaise::processing::status::PROCESS_OK;
  }
};

// Register module with Falaise's plugin system on load
FALAISE_REGISTER_MODULE(MyModule)
// Register module with Falaise's plugin system on load

Here we can see the minimal interface and infrastructure required by a module class for flreconstruct. The class must implement:

  1. A default constructor
  2. A user-defined constructor taking a falaise::config::property_set (the module's configuration) and a datatools::service_manager (access to services) as arguments
  3. A process member function taking a datatools::things object (the event) as its argument and returning a falaise::processing::status code

To make the plugin we'll build from this code loadable by flreconstruct we must also use the FALAISE_REGISTER_MODULE macro, passing it the class's typename. This will also become a string that can be used to create a module of this type in an flreconstruct pipeline script.

The non-default constructor is responsible for initializing the module using, if required, the information supplied in the falaise::config::property_set and datatools::service_manager objects. Our basic module doesn't require any configuration or service information so we simply ignore these arguments. Later tutorials will cover module configuration and use of services by modules.

The process member function performs the actual operation on the event, which is represented by a datatools::things instance. It is passed via non-const reference so process can both read and write data to the event. As noted above, a later tutorial will cover the interface and use of datatools::things. We therefore don't do anything with the event, and simply write a message to standard output so that we'll be able to see the method being called in flreconstruct. process must return a processing exit code. In this case, our processing is always successful, so we return falaise::processing::status::PROCESS_OK.

Building the Loadable Shared Library

With the source code for MyModule in place we need to build a shared library from it that flreconstruct can load at runtime to make MyModule usable in a pipeline. As MyModule uses components from Falaise, the compilation needs to use its headers, libraries and dependencies. The simplest way to set this up is to use CMake to build the shared library and make use of Falaise's find_package support.

To do this, we add a CMake script alongside the sources:

$ ls
MyModule.cpp
$ touch CMakeLists.txt
$ ls
CMakeLists.txt MyModule.cpp
$

The implementation of CMakeLists.txt is very straightforward:

# Check cmake version meets our requirements
cmake_minimum_required(VERSION 3.9)
# Declare project, which will configure compiler for us
project(MyModule)
# Modules use Falaise, so we need to locate this or fail
# Locating Falaise will automatically locate all of its
# dependencies such as Bayeux, ROOT and Boost.
find_package(Falaise REQUIRED)
# Build a shared/dynamic library from our source
add_library(MyModule SHARED MyModule.cpp)
# Link this library to the FalaiseModule library
# This ensures the correct compiler flags, include paths
# and linker flags are used to compile our library.
target_link_libraries(MyModule Falaise::FalaiseModule)

Comments begin with a #. The first two commands simply setup CMake and the compiler for us. The find_package command will locate Falaise for us, and we supply the REQUIRED argument to ensure CMake will fail if a Falaise install cannot be found. The add_library command creates the actual shared library. Breaking the arguments to add_library down one by one:

  1. MyModule : the name of the library, which will be used to create the on disk name. For example, on Linux, this will output a library file libMyModule.so, and on Mac OS X a library file libMyModule.dylib.
  2. SHARED : the type of the library, in this case a dynamic library.
  3. MyModule.cpp : all the sources needed to build the library.

Finally, the target_link_libraries command links the shared library to Falaise's Falaise::FalaiseModule target. This ensures that compilation and linking of the MyModule target will use the correct compiler and linker flags for use of Falaise. The flreconstruct application makes a default set of libraries available, and if you require use of additional ones, CMake must be set up to find and use these. This is documented later in this tutorial.
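If the module later grows beyond a single translation unit, only the add_library call changes: list every source file after the SHARED keyword (the extra file name below is hypothetical):

```cmake
# Build the shared library from several sources (MyOtherCode.cpp is hypothetical)
add_library(MyModule SHARED
  MyModule.cpp
  MyOtherCode.cpp
  )
target_link_libraries(MyModule Falaise::FalaiseModule)
```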

For more detailed documentation on CMake, please refer to its online help.

To build the library, we first create a so-called build directory to hold the files generated by the compilation to isolate them from the source code. This means we can very quickly delete and recreate the build without worrying about deleting the primary sources (it also helps to avoid accidental commits of local build artifacts to Git!). This directory can be wherever you like, but it's usually most convenient to create it alongside the directory in which the sources reside. In this example we have the directory structure:

$ pwd
/path/to/MyWorkSpace
$ tree .
.
`-- MyModule
|-- CMakeLists.txt
`-- MyModule.cpp
1 directory, 2 files
$

so we create the build directory under /path/to/MyWorkSpace as

$ mkdir MyModule-build
$ tree .
.
|-- MyModule
| |-- CMakeLists.txt
| `-- MyModule.cpp
`-- MyModule-build
2 directories, 2 files
$

The first step of the build is to change into the build directory and run cmake to configure the build of MyModule:

$ cd MyModule-build
$ cmake -DCMAKE_PREFIX_PATH=/where/Falaise/is ../MyModule

Here, the CMAKE_PREFIX_PATH argument should be the directory under which Falaise was installed. If you installed Falaise using brew and are using the snemo-shell environment then you will not need to set this. The last argument ../MyModule points CMake to the directory holding the CMakeLists.txt file for the project we want to build, in this case our custom module.

Running the command will produce output that is highly system dependent, but you should see something along the lines of

$ cmake -DCMAKE_PREFIX_PATH=/where/Falaise/is ../MyModule
-- The C compiler identification is AppleClang 10.0.0.10001044
-- The CXX compiler identification is AppleClang 10.0.0.10001044
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
...
-- Configuring done
-- Generating done
-- Build files have been written to: /..../MyModule-build
$

The exact output will depend on which compiler and platform you are using. However, the last three lines are common apart from the path, and indicate a successful configuration. Listing the contents of the directory shows that CMake has generated a Makefile for us:

$ ls
CMakeCache.txt CMakeFiles cmake_install.cmake Makefile
$

To build the library for our module we simply run make:

$ make
Scanning dependencies of target MyModule
[ 50%] Building CXX object CMakeFiles/MyModule.dir/MyModule.cpp.o
[100%] Linking CXX shared library libMyModule.dylib
[100%] Built target MyModule
$

If the build succeeds, we now have the shared library present in our build directory:

$ ls
CMakeCache.txt CMakeFiles Makefile cmake_install.cmake libMyModule.dylib
$

Note that the extension of the shared library is platform dependent (.dylib for Mac, .so on Linux). With the library built, we now need to make flreconstruct aware of it so we can use MyModule in a pipeline.

Running flreconstruct With a Custom Module

To use our new module in flreconstruct we need to tell the application about it before using it in a pipeline. We do this through the pipeline script we pass to flreconstruct via

  1. Adding a new section named flreconstruct.plugins which tells flreconstruct about libraries to be loaded.
  2. Adding a section declaring our module

We create a script named MyModulePipeline.conf in our project directory:

$ pwd
/path/to/MyWorkSpace/MyModule
$ ls
CMakeLists.txt MyModule.cpp MyModulePipeline.conf

This script takes the same basic form as shown in the tutorial on using flreconstruct:

# - Configuration Metadata
#@description Chain pipeline using a single custom module
#@key_label "name"
#@meta_label "type"
# - Custom modules
# The "flreconstruct.plugins" section to tell flreconstruct what
# to load and from where.
[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "."
# - Pipeline configuration
# Must define "pipeline" as this is the main module flreconstruct will use
# Make it use our custom module by setting the 'type' key to its typename
# At present, it takes no configuration, so it suffices to declare it
[name="pipeline" type="MyModule"]

The plugins key in the flreconstruct.plugins section is a list of strings naming the libraries to be loaded by flreconstruct at startup. These are taken as the "basename" of the library, from which the full physical file to be loaded, lib<basename>.{so,dylib}, is constructed. flreconstruct only searches for plugin libraries in its builtin location by default, so custom modules must set the <basename>.directory property to tell it the path under which their <basename> library is located.

In the above example, MyModule.directory : string = "." tells flreconstruct to look in the current working directory, i.e. the directory from which it was run, for the MyModule plugin. This is convenient for testing a local build of a module, as we can run flreconstruct directly from the build directory of our module and it will locate the library immediately. You can also specify absolute paths, e.g.

[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "/path/to/MyWorkSpace/MyModule-build"

or paths containing environment variables which will be expanded automatically, e.g.

[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "${MYMODULE_PATH}"

With the loading of the custom module in place, we can use it in the script as we did for the builtin modules. As we did in the trivial pipeline example for flreconstruct, we can simply declare the main pipeline module as being of the MyModule type, hence the line

[name="pipeline" type="MyModule"]

Note that the type key value must always be the full typename of the module, as used in the FALAISE_REGISTER_MODULE macro. Remember that in MyModule.cpp we called the macro as:

FALAISE_REGISTER_MODULE(MyModule)

thus type is just "MyModule".

We can now run flreconstruct with MyModulePipeline.conf as the pipeline script. Because we've specified the location of the MyModule library as the working directory, we first change to the directory in which this library resides, namely our build directory. We also need to have a file to process, so we run flsimulate first to create a simple file of one event (NB in the following, we assume you have flsimulate and flreconstruct in your PATH).

$ cd /path/to/MyWorkSpace/MyModule-build
$ ls
CMakeCache.txt CMakeFiles cmake_install.cmake libMyModule.dylib Makefile
$ flsimulate -o MyModuleTest.brio
....
$ ls
CMakeCache.txt cmake_install.cmake Makefile
CMakeFiles libMyModule.dylib MyModuleTest.brio
$ flreconstruct -i MyModuleTest.brio -p ../MyModule/MyModulePipeline.conf
[notice:void datatools::library_loader::init():449] Automatic loading of library 'MyModule'...
MyModule::process called!
$

We can see that flreconstruct loaded the MyModule library, and the MyModule::process method was called, showing that the pipeline used our custom module! We can also add our module into a chain pipeline and other pipeline structures. For example, try the following pipeline script:

# - Configuration
#@description Simple pipeline using a chain
#@key_label "name"
#@meta_label "type"
# - Module load section
[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "."
# Must define "pipeline" as this is the module flreconstruct will use
[name="pipeline" type="dpp::chain_module"]
modules : string[3] = "start_module" "dump" "end_module"
[name="start_module" type="MyModule"]
[name="dump" type="dpp::dump_module"]
[name="end_module" type="MyModule"]

You should see each event being dumped, with the dumped info being bracketed by the MyModule::process called! text from each of the MyModule instances in the chain.

Making Your Module Configurable

The minimal module presented in the section above outputs a fixed message which can only be changed by modifying the code and recompiling the module. In most use cases hard-coding like this is sufficient, but if your module has parameters that may change frequently (e.g. a threshold that requires optimization), it is easy to make them configurable at runtime through the pipeline script. To demonstrate this, we'll modify the MyModule class from earlier to have a single std::string type data member and make this configurable from the pipeline script.

Adding a Configurable Data Member

To add a configurable data member to MyModule, we modify the code as follows:

#include <iostream>

// Interface from Falaise
#include "falaise/snemo/processing/module.h"

class MyModule {
 public:
  MyModule() = default;

  MyModule(falaise::config::property_set const& ps,
           datatools::service_manager& /*services*/)
      : message(ps.get<std::string>("message")) {}

  falaise::processing::status process(datatools::things& /*event*/) {
    std::cout << "MyModule::process says '" << message << "'\n";
    return falaise::processing::status::PROCESS_OK;
  }

 private:
  std::string message{};
};

FALAISE_REGISTER_MODULE(MyModule)

The key changes are:

  1. std::string data member message
  2. Use of the data member in the process member function
  3. Use of the falaise::config::property_set instance ps passed to the user-defined constructor to extract configuration information

Here, message is our configurable parameter, and is initialized in the MyModule constructor using the falaise::config::property_set::get member function. We supply std::string as the template argument as that is the type we need, and message as the parameter ID to extract. This ID does not have to match the name of the data member, but it is useful to do so for clarity.

As configuration is always done through the constructor, you can then use configured data members just like any other. In this case we simply report the value of message to standard output in the process member function.

Building a Loadable Shared Library for a Configurable Module

No special build setup is needed for a configurable module, so you can use the CMake script exactly as given for the basic module above. If you've made the changes as above, simply rebuild!

Configuring MyModule from the Pipeline Script

In the preceding section, we saw that module configuration is passed to a module through an instance of the falaise::config::property_set class. This instance is created by flreconstruct for the module from the properties, if any, supplied in the section of the pipeline script defining the module. To begin with, we can use the pipeline script from earlier to run the configurable module, simply adding the required string parameter message to its section:

# - Configuration Metadata
#@description Chain pipeline using a single custom module
#@key_label "name"
#@meta_label "type"
# - Custom modules
# The "flreconstruct.plugins" section to tell flreconstruct what
# to load and from where.
[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "."
# - Pipeline configuration
# Must define "pipeline" as this is the module flreconstruct will use
# Make it use our custom module by setting the 'type' key to its typename
# and supply the required "message" string parameter
[name="pipeline" type="MyModule"]
message : string = "hello"

The key name message and its type must match that looked for by MyModule's constructor in the supplied falaise::config::property_set. Allowed key/types and their mappings to C++ types are documented in a later section. The script can be run in flreconstruct as before:

$ cd /path/to/MyWorkSpace/MyModule-build
$ ls
CMakeCache.txt cmake_install.cmake Makefile
CMakeFiles libMyModule.so MyModuleTest.brio
$ flreconstruct -i MyModuleTest.brio -p ../MyModule/MyModulePipeline.conf
[notice:void datatools::library_loader::_init():467] Automatic loading of library 'MyModule'...
MyModule::process says 'hello'
$

We can see that the module has been run using the supplied value for the parameter. To change the message parameter, we simply update its value, e.g.

[name="pipeline" type="MyModule"]
message : string = "goodbye"

Having updated the value, we can rerun with the modified pipeline script:

$ flreconstruct -i MyModuleTest.brio -p ../MyModule/MyModulePipeline.conf
[notice:void datatools::library_loader::_init():467] Automatic loading of library 'MyModule'...
MyModule::process says 'goodbye'
$

and see that the parameter has been changed to the value defined in the script. Keys are bound to the section they are defined in, so we can use the same module type multiple times but with different parameters. For example, try the following pipeline script:

# - Configuration Metadata
#@description Chain pipeline using a single custom module
#@key_label "name"
#@meta_label "type"
# - Custom modules
# The "flreconstruct.plugins" section to tell flreconstruct what
# to load and from where.
[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "."
# - Pipeline configuration
# Chain dpp::dump_module between two MyModules
[name="pipeline" type="dpp::chain_module"]
modules : string[3] = "hello" "process" "goodbye"
# - Per Module configurations
[name="hello" type="MyModule"]
message : string = "hello"
[name="process" type="dpp::dump_module"]
[name="goodbye" type="MyModule"]
message : string = "goodbye"

You should see each event being dumped, with the dumped info being bracketed by the output from each MyModule instance, each with different values of the message parameter.

Both flreconstruct and falaise::config::property_set work together to check that needed parameters are supplied and of the correct type. For example, if we did not supply the message parameter:

# - Configuration Metadata
#@description Chain pipeline using a single custom module
#@key_label "name"
#@meta_label "type"
# - Custom modules
# The "flreconstruct.plugins" section to tell flreconstruct what
# to load and from where.
[name="flreconstruct.plugins" type="flreconstruct::section"]
plugins : string[1] = "MyModule"
MyModule.directory : string = "."
# - Pipeline configuration
# Must define "pipeline" as this is the module flreconstruct will use
# Make it use our custom module by setting the 'type' key to its typename
# and leave out the required "message" parameter to illustrate error reporting
[name="pipeline" type="MyModule"]

then flreconstruct will error out and tell us what happened:

$ flreconstruct -i MyModuleTest.brio -p ../MyModule/MyModuleMissingParam.conf
[notice:void datatools::library_loader::_init():467] Automatic loading of library 'MyModule'...
[fatal:falaise::exit_code FLReconstruct::do_pipeline(const FLReconstruct::FLReconstructParams &):156] Failed to initialize pipeline : initialization of module 'pipeline' (type 'MyModule') failed with exception:
- missing_key_error: property_set does not hold a key 'message'
- config:
`-- <no property>
$

Equally, if we supply the parameter but it has the wrong type:

[name="pipeline" type="MyModule"]
message : integer = 42

then a similar error would be reported:

$ flreconstruct -i MyModuleTest.brio -p ../MyModule/MyModuleWrongType.conf
[notice:void datatools::library_loader::_init():467] Automatic loading of library 'MyModule'...
[fatal:falaise::exit_code FLReconstruct::do_pipeline(const FLReconstruct::FLReconstructParams &):156] Failed to initialize pipeline : initialization of module 'pipeline' (type 'MyModule') failed with exception:
- wrong_type_error: value at 'message' is not of requested type
- config:
`-- Name : 'message'
|-- Type : integer (scalar)
`-- Value : 42
$

Additional methods for configuration and validation are covered in the following section.

Best Practices for Module Configuration

Whilst the ability to make modules configurable is extremely useful, you should aim to minimize the number of parameters your module takes. This helps to make the module easier to use and less error prone. Remember that the modular structure of the pipeline means that tasks are broken down into smaller chunks, so you should consider refactoring complex modules into smaller orthogonal units.

An important restriction on configurable parameters is that they can only be of types understood by falaise::config::property_set and the underlying datatools::properties configuration language.

C++ Type                      | property_set accessor                                  | properties script syntax
std::string                   | auto x = ps.get<std::string>("key");                   | key : string = "hello"
int                           | auto x = ps.get<int>("key");                           | key : integer = 42
double                        | auto x = ps.get<double>("key");                        | key : real = 3.14
bool                          | auto x = ps.get<bool>("key");                          | key : boolean = true
std::vector<std::string>      | auto x = ps.get<std::vector<std::string>>("key");      | key : string[2] = "hello" "world"
std::vector<int>              | auto x = ps.get<std::vector<int>>("key");              | key : integer[2] = 1 2
std::vector<double>           | auto x = ps.get<std::vector<double>>("key");           | key : real[2] = 3.14 4.13
std::vector<bool>             | auto x = ps.get<std::vector<bool>>("key");             | key : boolean[2] = true false
falaise::config::path         | auto x = ps.get<falaise::config::path>("key");         | key : string as path = "/tmp/foo"
falaise::config::quantity_t   | auto x = ps.get<falaise::config::length_t>("key");     | key : real as length = 3.14 mm
falaise::config::property_set | auto x = ps.get<falaise::config::property_set>("key"); | see below

The last item handles the case of nested configurations, for example

[name="nested" type="NestedModule"]
a.x : int = 1
a.y : int = 3
b.x : int = 2
b.y : int = 4

The keys can be extracted individually from the resultant falaise::config::property_set, e.g.

auto x = ps.get<int>("a.x");

However, nested configurations typically imply structured data, with periods indicating the nesting level. Each level can be extracted into its own set of properties, e.g.

auto a = ps.get<falaise::config::property_set>("a"); // a now holds key-values x=1, y=3
auto b = ps.get<falaise::config::property_set>("b"); // b now holds key-values x=2, y=4

with subsequent handling as required. A restriction on nesting is that it cannot support configurations such as

[name="nested" type="BadlyNested"]
a : int = 1
a.x : real = 3.14

as the key "a" is ambiguous. You should not use this form in any case as it generally indicates bad design.

When using falaise::config::property_set, you have several methods to validate the configuration supplied to your module. By validation, we mean checking the configuration supplies:

  1. The required parameters...
  2. ... of the correct type ...
  3. ... in the correct value range

All configuration and validation must be handled in the module's constructor, with exceptions thrown if a validation check fails. The first two checks can be handled automatically by falaise::config::property_set through its get member functions.

Parameters may be required, i.e. there is no sensible default, or optional, i.e. where we may wish to adjust the default. A required parameter is validated for existence and correct type by the single parameter get member function, e.g.

class MyModule {
 public:
  MyModule(falaise::config::property_set const& ps,
           datatools::service_manager& /*services*/)
      : message( ps.get<std::string>("message") )
  {}
  // other code omitted
 private:
  std::string message;
};

If the ps instance does not hold a parameter "message", or holds it with a type other than std::string, then an exception is thrown and will be handled automatically by flreconstruct.

An optional parameter is validated in the same way, but we use the two-parameter form of get, e.g.:

class MyModule {
 public:
  MyModule(falaise::config::property_set const& ps,
           datatools::service_manager& /*services*/)
      : myparam( ps.get<int>("myparam", 42) )
  {}
  // other code omitted
 private:
  int myparam;
};

Here, if the ps instance does not hold a parameter "myparam" then the myparam data member will be initialized to 42. If ps holds parameter "myparam" of type int then myparam will be set to its value. If ps holds parameter "myparam" and it is not of type int, then an exception is thrown (and handled by flreconstruct as above). Both forms are particularly useful for parameters that supply physical quantities such as lengths. See the documentation on Falaise's System of Units for further information on their use to assist with dimensional and scaling correctness.

Additional validation tasks such as bounds checking must be handled manually, and generally within the body of the module's constructor. For example, if we have a required integer parameter that must be even, we could validate this via:

class MyModule {
 public:
  MyModule(falaise::config::property_set const& ps,
           datatools::service_manager& /*services*/)
      : myparam( ps.get<int>("myparam") )
  {
    if (myparam % 2 != 0) {
      throw std::out_of_range{"value for 'myparam' parameter is not even"};
    }
  }
  // other code omitted
 private:
  int myparam;
};

You should prefer to initialize parameter values in the constructor's initializer list, with further validation, if required, in the constructor body. Errors must be handled by throwing an exception derived from std::exception.

Using Additional Libraries in Your Module

The flreconstruct program provides the needed libraries to run core modules, specifically the minimal set:

  • Falaise
  • Bayeux
  • Boost
    • filesystem
    • system
    • serialization
    • iostreams
    • regex
  • GSL
  • CLHEP
  • ROOT
    • Core
    • RIO
    • Hist
    • MathCore
    • Matrix
    • Net
    • Tree
    • Thread
  • Qt5 QtCore

Linking your module to the Falaise::FalaiseModule target in target_link_libraries ensures that your module uses the appropriate headers at compile time, and the correct symbols at runtime. If your module requires use of additional libraries, then you will need to get CMake to locate these and then link them to your module.

In the most common case of using non-core libraries from the ROOT package, the find_package step would be modified to:

# Find Falaise first, which ensures we use correct ROOT
find_package(Falaise REQUIRED)
# Find ROOT after Falaise, which guarantees use of same ROOT, but configure extra components
# in this case, TMVA.
find_package(ROOT REQUIRED TMVA)

The module can then be linked to the additional library by adding it in the target_link_libraries command:

target_link_libraries(MyModule Falaise::FalaiseModule ${ROOT_TMVA_LIBRARY})

For other packages, find_package followed by target_link_libraries can be used in the same fashion.

Next Steps

The above examples have illustrated the basic structures needed to implement a module and load it into flreconstruct.

Practical modules will access the event object passed to them, process it and then write information back into the event record. Using the event data model in modules is covered in a dedicated tutorial.

Modules may also need access to global data such as run conditions. FLReconstruct uses the concept of "Services" to provide such data, and a tutorial on using services in modules is provided.

Modules should also always be documented so that users have a clear picture of the task performed by the module and its configurable parameters. A tutorial on documenting modules using the builtin Falaise/Bayeux tools is available.

Though modules for FLReconstruct may not be directly integrated in Falaise, for consistency and maintainability their code should use the Falaise coding standards.