\documentclass{article}

\title{PamCut Developer's Guide}
\author{Nicola Mori}

\begin{document}
\maketitle
\tableofcontents
\section{The philosophy}

{\bf PamCut} is an abstract class, defining the interface for a cut object used
in the analysis of PAMELA data. The main idea behind the development is that a
cut object must be capable of saying whether an event is good or not, i.e., whether it
satisfies some selection criteria. These criteria have to be implemented from
scratch every time a specific cut is needed, by defining a concrete class which
inherits from {\bf PamCut} and provides an implementation for the pure virtual
method {\bf Check}.
A special derived class is {\bf PamCutCollection}. This is a sort of
container for basic cuts, and is treated as a single cut. Its {\bf Check}
implementation simply invokes the {\bf Check}s of all the single cuts, and it
is successful if all the single cuts are successful.
{\bf PamCut} also provides the interface for two post-processing methods:
{\bf OnGood} and {\bf OnBad}. In derived classes these can contain specific
tasks to be performed whenever an event satisfies the {\bf Check} condition or
not. The method {\bf ApplyCut} takes care of invoking {\bf Check} and
subsequently calls {\bf OnGood} or {\bf OnBad} according to the result of {\bf Check}.
Summarizing, {\bf Check}ing an event simply means asking the object whether the
event satisfies the selection criteria; applying a cut means checking and then
performing post-selection tasks. The cut can be applied to a bunch of events by
means of the {\bf Process} method.

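Schematically, the interface just described could be sketched as follows. This is only an illustration reconstructed from the description above: apart from the names {\bf Check}, {\bf ApplyCut}, {\bf OnGood}, {\bf OnBad}, {\bf Process}, {\bf CUTOK} and {\bf \_cutName}, the exact signatures (in particular the second argument of {\bf OnBad} and the return type of {\bf ApplyCut}) are assumptions and may differ from the real {\it PamCutBase.h}.

\begin{verbatim}
// Schematic sketch of the PamCut interface (not the actual header).
class PamCut {
public:
  PamCut(char *cutName): _cutName(cutName) {}
  virtual ~PamCut() {}

  // Pure virtual: returns CUTOK if the event passes the selection.
  virtual int Check(PamLevel2 *event) = 0;

  // Post-selection hooks: do nothing by default.
  virtual void OnGood(PamLevel2 *event) {}
  virtual void OnBad(PamLevel2 *event, int selectionResult) {}

  // Check the event, then call the appropriate post-selection hook.
  virtual int ApplyCut(PamLevel2 *event) {
    int result = Check(event);
    if (result == CUTOK)
      OnGood(event);
    else
      OnBad(event, result);
    return result;
  }

  // Apply the cut to a range of events (assumed signature).
  virtual void Process(PamLevel2 *events, unsigned int first, unsigned int last);

protected:
  char *_cutName;   // the cut's name
};
\end{verbatim}
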
\subsection{More on collections}
A collection is an object which inherits from {\bf PamCutCollection}, which in
turn inherits from {\bf PamCut}. So a collection is a cut itself, meaning that
its interface contains a {\bf Check} method, an {\bf ApplyCut} method and so
on. Logically, this is in agreement with the fact that a bunch of cuts can be
thought of as a single cut whose result is the logical AND of all the basic
cuts. More specifically, the implementation chosen for the {\bf PamCutCollection}
methods consists in simply performing a cyclic call of the corresponding
methods of the basic cuts. So {\bf PamCutCollection::Check} will call in sequence
all the {\bf Check} methods of the basic cuts it contains, and {\bf
PamCutCollection::ApplyCut} will call the basic {\bf ApplyCut}s. This last
feature deserves some more words. When the collection calls a basic {\bf
ApplyCut}, the basic cut will also perform its specific post-selection tasks as
defined in its implementations of {\bf OnGood} and {\bf OnBad}. If all the
basic cuts turn out to be satisfied, then {\bf PamCutCollection::ApplyCut} will
call {\bf PamCutCollection::OnGood}, allowing for the execution of a
post-processing task which occurs only if all the basic selection criteria are
satisfied. Indeed, as said above a collection is a cut, so it behaves exactly
like a cut: when you apply it, you will also perform the appropriate post-processing.
We then have two levels of post-processing: the first is triggered by the success
of a basic cut, the second by the success of the whole sequence of basic cuts.
This modular behaviour achieved with collections allows for the definition of a
hierarchy of cuts: a collection can contain other collections, which in turn
can contain other collections or basic cuts, in a tree-like hierarchy. Each
level has its own post-selection tasks, allowing for fine control of the
post-processing procedure.

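To fix ideas, a hypothetical two-level hierarchy could be assembled as in the sketch below. The cut classes ({\bf TofBetaCut}, {\bf TrkChi2Cut}, {\bf TrkNHitsCut}) are invented placeholders, and the constructors shown (taking a cut name, as {\bf DummyCut} does later in this guide) are assumptions; {\bf AddCut} is the method used in the usage example at the end of this guide.

\begin{verbatim}
// Hypothetical tree of cuts: the top-level collection contains a basic
// cut and a sub-collection, which in turn contains two basic cuts.
PamCutCollection trackerCuts("trackerCuts");   // inner level
TrkChi2Cut chi2Cut("chi2Cut");                 // placeholder cut classes
TrkNHitsCut nHitsCut("nHitsCut");
trackerCuts.AddCut(chi2Cut);
trackerCuts.AddCut(nHitsCut);

PamCutCollection allCuts("allCuts");           // outer level
TofBetaCut betaCut("betaCut");
allCuts.AddCut(betaCut);
allCuts.AddCut(trackerCuts);                   // a collection is a cut itself

// Applying allCuts applies the whole tree, with post-selection
// tasks executed at each level of the hierarchy.
\end{verbatim}
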
To perform some tests it could be useful to have a collection which applies
the whole bunch of cuts to all the events, regardless of whether some cuts are
not satisfied for a specific event ({\bf PamCutCollection} stops the evaluation of
the current event as soon as a cut is not satisfied). This is achieved with the
{\bf BlindCutCollection} class, which blindly checks all the cuts for all the
events. This will lead to a call to {\bf OnGood} or {\bf OnBad} for all the
cuts for each event; the collection will then call its own {\bf OnGood} or {\bf
OnBad} depending on whether all the cuts have been satisfied, much like in
{\bf PamCutCollection}. See the Doxygen html documentation for more information about
{\bf BlindCutCollection} and other specific cut implementations.

\section{Actions and the SmartCollection}
When performing an analysis, each time an event is selected as good some
actions are likely to be performed, like filling histograms or writing a
report. To automate these tasks, the class {\bf CollectionAction} has been
designed. A {\bf CollectionAction} object has a {\bf Setup} method which
may contain the initialization procedures, like reading parameters from a file.
The constructor can also contain some initialization. The finalization, like
writing histograms to a file, goes in the {\bf Finalize} method. The specific
actions for good and bad events are to be defined in the {\bf OnGood} and {\bf
OnBad} methods, much like in collections. {\bf CollectionAction} is an abstract
class, which does nothing but define the interface. Its concrete
implementations will be called actions.\\
Actions are automatically handled by the {\bf SmartCollection} class. It
inherits from {\bf PamCutCollection}, and contains a vector of {\bf
CollectionAction} objects. These actions can be added using {\bf
SmartCollection::AddAction}; for each of them, the collection will take care of
calling {\bf Setup} at the beginning of the analysis, {\bf OnGood} and {\bf
OnBad} for every event (depending on the selection result), and {\bf Finalize}
at the end of the analysis. In all other aspects, it behaves exactly as {\bf
PamCutCollection}.\\
Loosely speaking, after defining an action one simply has to instantiate it,
add it to a {\bf SmartCollection} and launch the analysis (fire and
forget\ldots).

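The fragment below sketches this ``fire and forget'' usage. {\bf HistoAction} is a hypothetical action class, the {\bf SmartCollection} constructor shown is an assumption, and \verb1event1 is the PamLevel2 pointer used in the usage example at the end of this guide; {\bf AddCut}, {\bf AddAction} and {\bf Process} are the methods described above.

\begin{verbatim}
// Hypothetical analysis fragment: one action attached to a SmartCollection.
SmartCollection smartCollection;        // behaves like a PamCutCollection
DummyCut dummy("dummy");                // a cut defined as shown later
HistoAction histoAction("histoAction"); // hypothetical CollectionAction subclass

smartCollection.AddCut(dummy);
smartCollection.AddAction(histoAction);

// Setup() is called at the beginning, OnGood()/OnBad() for each event,
// Finalize() at the end of the event loop.
smartCollection.Process(event, 0, event->GetEntries() - 1);
\end{verbatim}
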
\section{The software organization}
The software is organized in a tree of directories. The idea is that each node
of this tree must provide the necessary information about its sub-branches. In
each directory, a header file will contain \verb1#include1 directives for all
header files in the sub-directories. This way, it is sufficient to include the
top-level header in the analysis software to automatically include all the
headers of the project. This top-level header is {\it PamCutEnv.h}. Each time a
sub-directory is created, the headers it contains must be included in the
parent node's header.

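As an illustration, a node header could look like the following sketch; the directory and file names ({\it DummyDir}, {\it DummyDirCuts.h} and the cut headers it gathers) are hypothetical.

\begin{verbatim}
// DummyDir/DummyDirCuts.h -- hypothetical node header: it only gathers
// the headers of its sub-directories, so that including it (directly or
// through PamCutEnv.h) makes all the cuts below this node available.
#ifndef DUMMYDIRCUTS_H_
#define DUMMYDIRCUTS_H_

#include "DummyCut/DummyCut.h"
#include "AnotherCut/AnotherCut.h"

#endif /* DUMMYDIRCUTS_H_ */
\end{verbatim}
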
\subsection{The makefile organization}
The instructions to build the software library are encoded in the standard {\bf
make} way, following the same tree structure as the software. Each node
directory contains an {\it include.mk} file, which includes all the
{\it include.mk} files in the sub-directories. This chain of inclusions terminates at the
leaves, which most of the time contain the definition and implementation of
the classes. Each leaf directory must provide build instructions for the files
it contains, in a file called {\it subdir.mk}; this file has to be included in
the parent {\it include.mk}. The top-level {\it include.mk} is included by
the {\it makefile}.\\
This way, each directory takes care of its sub-directories, making it quick and
easy to add new ones.

\section{The software architecture}
The software is organized in a tree of directories. The root folder contains a
{\it PamCutBase} folder, in which the definitions of the base classes ({\bf
PamCut} and {\bf PamCutCollection}) are stored, and the general headers {\it
PamCutEnv.h} and {\it CommonDefs.h}. {\it PamCutEnv.h} contains the inclusion
of all the headers of the framework, allowing for the use of the entire
software with only one \verb1#include1 in the code; {\it CommonDefs.h} contains
all the definitions which are relevant for the entire framework.
To develop a specific cut one has to define a class derived from {\bf PamCut},
which must at least provide a concrete implementation of {\bf Check}. One can
also define new versions of {\bf OnGood} and {\bf OnBad} for specific
post-selection tasks (in the base class these methods do nothing). Be very
careful if you decide to redefine {\bf ApplyCut}: remember that the interface
requires it to call the post-selection routines, as in the sketch below.
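The fragment below is a schematic example of such a redefinition; the exact {\bf ApplyCut} and {\bf OnBad} signatures are assumptions, the important point being that {\bf OnGood} or {\bf OnBad} is still invoked according to the result of {\bf Check}.

\begin{verbatim}
// Hypothetical ApplyCut redefinition in a derived cut: whatever extra
// work is added, the post-selection hooks must still be called.
int DummyCut::ApplyCut(PamLevel2 *event) {
  int result = Check(event);
  // ... possible extra bookkeeping specific to this cut ...
  if (result == CUTOK)
    OnGood(event);
  else
    OnBad(event, result);
  return result;
}
\end{verbatim}
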
A good rule for keeping the software distribution clean is to use a different
folder for each cut definition, named after the cut itself. When you define a new
cut, create a folder named after the cut inside an adequate parent directory and
place the {\it .h} and {\it .cpp} files inside it. To speed up development, you may copy
and paste an existing cut and then modify it.\\
Actions can be defined in the same way.

\section{How to define a cut}
As said above, to define a cut (let's name it {\bf DummyCut}) it is suggested
to create a folder {\it DummyCut} inside, e.g., a {\it DummyDir} directory
inside the root folder, and to create inside it two files: {\it DummyCut.h},
which will contain all the declarations, and {\it DummyCut.cpp}, where the
implementation will be coded. A typical structure for {\it DummyCut.h} would be:

\begin{verbatim}
#ifndef DUMMYCUT_H_
#define DUMMYCUT_H_

/*! @file DummyCut.h The DummyCut class declaration file */

#include "../../PamCutBase/PamCutBase.h"

/*! @brief An example cut. */
class DummyCut: public PamCut {
public:

  /*! @brief Constructor. */
  DummyCut(char *cutName):
    PamCut(cutName) {
  }

  /*! @brief Destructor. */
  ~DummyCut() {
  }

  /*! @brief The dummy cut check
   *
   * This routine checks the event using the dummy cut criteria.
   *
   * @param event The event to analyze.
   * @return CUTOK if the dummy cut is satisfied
   */
  int Check(PamLevel2 *event);

};

#endif /* DUMMYCUT_H_ */

\end{verbatim}

Note the inclusion of {\it PamCutBase.h}: this is essential to let {\bf
DummyCut} know about its parent class, which in this case is {\bf PamCut}.
Another thing to care about is the multiple inclusion protection, i.e., the
preprocessor directives:
\begin{verbatim}
#ifndef DUMMYCUT_H_
#define DUMMYCUT_H_
.
.
.
#endif /* DUMMYCUT_H_ */
\end{verbatim}
This provides protection against multiple inclusion of a header in a single
compilation unit due to circular inclusion; in practice, it will save the
developer a lot of double-definition errors from the compiler. Be sure that all
your headers are encapsulated in such a structure, and that for each header
there is a different name after \verb1#ifndef1 (which must match the subsequent
\verb1#define1); otherwise, the compiler could skip the inclusion of some
headers. The convention adopted in {\bf PamCut} for such tags is
\verb1<Filename in uppercase>_H_1, which provides a unique tag for each header.

The essential redeclarations are: the constructor, the destructor and {\bf
Check}. The constructor deserves some observations: since constructors are not
inherited, each class must have its own. However, if a class has a parent
class, it is necessary to also initialize the parent class. If the parent class
has a default constructor (i.e., a constructor without arguments) the compiler
will automatically call it and the user has nothing to worry about. However,
{\bf PamCut} has no default constructor, so in every derived class there must
be an explicit call to the {\bf PamCut} constructor. This is what is done in
the lines:

\begin{verbatim}
DummyCut(char *cutName):
  PamCut(cutName) {
}
\end{verbatim}
{\bf PamCut} objects have a name, internally stored in the {\bf \_cutName} member; so
our {\bf DummyCut}, since it inherits from {\bf PamCut}, must also have a
name. Its constructor, indeed, requires a string containing the cut's name. But
in {\bf DummyCut} there's no local variable to store it, and there's no need to
define one since it is already present in {\bf PamCut}. The only thing to do is
to initialize it exactly as {\bf PamCut} would do: this is the purpose of
invoking \verb1PamCut(cutName)1. As an aside, what follows the colon after the
{\bf DummyCut} constructor declaration is called the \emph{initialization list}:
here one can (and must) call the constructors of all the objects and variables
contained in the class. These are executed before the class constructor's body,
so that one can safely assume that at construction time all the internal
objects and variables are properly initialized.
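For example, a hypothetical cut with its own data member (a threshold, say) would initialize both the parent class and that member in the initialization list:

\begin{verbatim}
// Hypothetical cut with an extra member: both PamCut and the member
// are initialized in the initialization list, before the body runs.
class ThresholdCut: public PamCut {
public:
  ThresholdCut(char *cutName, float threshold):
    PamCut(cutName), _threshold(threshold) {
  }

  int Check(PamLevel2 *event);

private:
  float _threshold;   // underscore prefix: PamCut naming convention
};
\end{verbatim}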

To recap and generalize, when you write a derived class you must always call
the constructor of its parent class, which will take care of initializing
the ``parent core'' of the derived class.

One can also redefine {\bf OnGood} and {\bf OnBad}, add new methods, variables and
so on. If you plan to build a general-purpose cut that could be used by
different people, please take the time to add the appropriate documentation to
the code. In the example, Doxygen comment lines (beginning with \verb1/*!1)
are shown; an online guide for Doxygen can be found at:
\newline
\newline
\verb1 http://www.stack.nl/~dimitri/doxygen/manual.html1

\vspace{.5cm}
Once the header has been prepared, it's time to implement the cut in {\it
DummyCut.cpp}:

\begin{verbatim}
/*! @file DummyCut.cpp The DummyCut class implementation file */

#include "DummyCut.h"

int DummyCut::Check(PamLevel2 *event) {

  if (<Some condition about the event>)
    return CUTOK;
  else
    return 0;
}
\end{verbatim}

In this very simple implementation a basic feature is noticeable: the interface
requires that whenever an event satisfies the selection criterion, {\bf Check}
must return the {\bf CUTOK} value. This value is defined in {\it CommonDefs.h}.
The return value for a discarded event is implementation-specific, and can be
anything but {\bf CUTOK}. The return value can differ
depending on the reason why the event has been discarded: for example, it can
be a number associated with a specific detector whose data is missing in a data
quality cut. It is then passed to {\bf OnBad}, which can perform some task
depending on the specific reason of the cut failure (e.g., which detector has
no data).
Remember also to include the header file.

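A schematic example of this mechanism is sketched below; the {\bf OnBad} signature (and its presence in {\it DummyCut.h}) and the meaning of the return codes are assumptions made for illustration.

\begin{verbatim}
// Hypothetical reason codes returned by Check when the event is bad.
const int NO_TOF_DATA = 1;
const int NO_TRK_DATA = 2;

void DummyCut::OnBad(PamLevel2 *event, int selectionResult) {
  // React differently depending on why Check discarded the event.
  switch (selectionResult) {
  case NO_TOF_DATA:
    // e.g., count events with missing ToF data
    break;
  case NO_TRK_DATA:
    // e.g., count events with missing tracker data
    break;
  default:
    break;
  }
}
\end{verbatim}
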
\section{How to set up the cut's build} \label{sec:build}
Once a cut has been declared and implemented, the makefiles have to be adjusted
to include it in the cut library. The {\it makefile} provided with the software
will build a library called {\it libPamCut.so}, which can then be linked to a
specific analysis code. It is based on a submakefile structure. Each folder
containing a cut must also include one of these submakefiles (named {\it
subdir.mk} by convention), which instructs the main {\it makefile} on how to build
the newly added cut. An example is:

\begin{verbatim}
# Choose a name for the object file. This must coincide with the
# .cpp and .h filename, except for the extension which has to
# be .o
OBJS += ./DummyDir/DummyCut/DummyCut.o

# Dependencies file. The extension must be .d, and the name equal
# to the cut name.
CPP_DEPS += ./DummyDir/DummyCut/DummyCut.d

# Rules for compilation. You will likely have only to specify
# the path. Put the directory path:
#   here                     here
./DummyDir/DummyCut/%.o: ./DummyDir/DummyCut/%.cpp
	@echo 'Building file: $<'
	@echo 'Invoking: GCC C++ Compiler'
	$(C++) -I${ROOTSYS}/include -I${PAM_INC} -I${PAM_INC}/yoda \
	-Wall -c -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)"\
	-o"$@" "$<"
	@echo 'Finished building: $<'
	@echo ' '

\end{verbatim}

Existing files can be used as a template. The first thing you have to modify
is the object name (\verb1OBJS1 variable): be careful to append the object name
to \verb1OBJS1 using \verb1+=1 instead of overwriting it with \verb1=1. The
paths in this file are relative to the root directory where the {\it
makefile} is, e.g., \verb1./1 is not the {\it DummyCut} directory where {\it
subdir.mk} is, but the root directory which contains the {\it makefile}. The
object file ({\it DummyCut.o}) must have the same name as the cut and as the directory that
contains the cut. The \verb1CPP_DEPS1 variable must be similarly modified.
This variable contains a list of dependency files, which list all the
external headers (e.g., PAMELA and ROOT headers) {\bf PamCut} depends on. These
lists are automatically generated, and allow {\it make} to rebuild {\bf
PamCut} whenever one of these headers is modified. Finally, one also has to
put the directory name in the target line which precedes the build command,
just below \verb1here1.

After creating the {\it subdir.mk}, it must be included in the {\it include.mk}
in the parent directory. It looks like this:

\begin{verbatim}
# Include here the submakefiles for each cut

# Cuts
-include DummyDir/DummyCut/subdir.mk
.
.
.
\end{verbatim}

Remember to write paths relative to the root directory. Going backward along
the chain of inclusions leads to the {\it makefile}:

\begin{verbatim}
# ------------------------ Build options ----------------------#

# C++ compiler and linker
C++ = g++

# Optimization flags.
OPTIMIZE = -g3 #-DDEBUGPAMCUT

# Library flags
EXCLUSIONFLAGS = #-DNO_TOFNUCLEI -DNO_CALONUCLEI -DNO_TRKNUCLEI

COMMONDEPS = makefile

-include include.mk

#------------------------ Make body ---------------------------#
#
# Below the make commands are defined. There are no options to
# set in this section, so it has to be modified only in case of
# radical changes to the make procedure.

# Remove command for clean
RM := rm -rf

# Additional dependencies from headers of other software
# (PAMELA, ROOT)
ifneq ($(MAKECMDGOALS),clean)
ifneq ($(strip $(CPP_DEPS)),)
-include $(CPP_DEPS)
endif
endif

# All Target
all: version libPamCut.so

# Tool invocations
libPamCut.so: $(OBJS) $(USER_OBJS)
	@echo 'Building target: $@'
	@echo 'Invoking: GCC C++ Linker'
	$(C++) -shared -o"libPamCut.so" $(OBJS)
	@echo 'Finished building target: $@'
	@echo ' '

# Other Targets
clean:
	-$(RM) $(CPP_DEPS) $(OBJS) libPamCut.so
	-@echo ' '

version:
	@gcc --version | grep gcc; echo

.PHONY: all clean dependents version
\end{verbatim}

The ``Build options'' section is the one that will most likely need a modification
if one wants to tweak the compilation; the make body will most of the time be
good as it is. The {\it makefile} contains comments to explain how to set the
various options.\\
The first option to set is the compiler: g++ will work on
almost all Linux systems. Next is the optimization level, which can be set to
one of the proposed values according to specific needs. The {\it
makefile} comments also contain instructions on how to enable debug sections
in the code.\\
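A debug section is typically just code guarded by the corresponding preprocessor flag; the fragment below is a hypothetical example, assuming that the \verb1-DDEBUGPAMCUT1 flag shown (commented out) in the {\it makefile} above is the one enabling it.

\begin{verbatim}
int DummyCut::Check(PamLevel2 *event) {
#ifdef DEBUGPAMCUT
  // Compiled in only when -DDEBUGPAMCUT is passed to the compiler
  // (requires #include <iostream>).
  std::cout << "DummyCut::Check called" << std::endl;
#endif
  // ... actual selection code ...
  return CUTOK;
}
\end{verbatim}
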
Since the PAMELA software is modular, some setups may lack libraries
needed by certain cuts. When designing such a cut it is a good habit to set up
its optional exclusion from the library. This can be efficiently done using
the preprocessor directive \verb1#ifndef1. For example, encapsulating all the code
in the header and implementation files of {\bf DummyCut} in a structure like
this:

\begin{verbatim}
#ifndef NO_DUMMYCUT
.
.
.
#endif
\end{verbatim}

\noindent will completely exclude {\bf DummyCut} from the environment if the
flag \verb1NO_DUMMYCUT1 is defined. This can be done by passing the parameter
\verb1-DNO_DUMMYCUT1 to the compiler. These exclusion flags can be defined in
the \verb1EXCLUSIONFLAGS1 variable. A concrete example is given by
{\bf TofNucleiZCut}, which requires the {\bf ToFNuclei} library: look at it to
see how its exclusion from the code is implemented, allowing one to build and use
{\bf PamCut} in those environments that do not include {\bf ToFNuclei}. In the
end, this is very similar to what is done with the debug sections.\\
The \verb1COMMONDEPS1 variable contains the files which, if modified, will trigger
a complete rebuild of the library. For example, if you change the {\it
makefile} by modifying an optimization option, all the modules should be rebuilt
so that the whole library will have the same level of optimization. That's why
{\it makefile} is in \verb1COMMONDEPS1. Add all the other files that should
behave like this.\\

If you need some extra modifications to the build system you need to know
more about {\it make}; an online guide is at:
\newline
\verb1http://www.linuxtopia.org/online_books/programming_tool_guides/1
\newline
\verb1 gnu_make_user_guide/make.html#SEC_Top1

\section{How to define an action}
Defining an action is very similar to defining a cut. An example header:

\begin{verbatim}
#ifndef DUMMYACTION_H_
#define DUMMYACTION_H_

#include "../CollectionAction/CollectionAction.h"

/*! @brief A dummy action definition. */
class DummyAction: public CollectionAction {

public:
  /*! @brief Constructor.
   *
   * @param actionName The action's name.
   */
  DummyAction(const char *actionName):
    CollectionAction(actionName) {
  }

  /*! @brief Destructor */
  ~DummyAction() {
  }

  /*! @brief The setup procedure.
   *
   * @param events The events pointer.
   */
  void Setup(PamLevel2 *events);

  /*! @brief The OnGood procedure.
   *
   * @param event The selected event.
   */
  void OnGood(PamLevel2 *event);

  /*! @brief Writes the tree of saved events to the output file. */
  void Finalize();
};

#endif /* DUMMYACTION_H_ */

\end{verbatim}

The {\bf DummyAction} declaration above is a good example. The action classes
must inherit from {\bf CollectionAction}, and their constructors have to call
the constructor of the ancestor class in the initialization list, similarly to
what happens for cuts. Then some of the base class' methods are overridden,
specifically {\bf Setup}, {\bf OnGood} and {\bf Finalize}. The last two methods
have to be overridden, since they are pure virtual (an action is supposed to do
something for good events and something to end the analysis, so making these
methods pure virtual is a way to enforce a definition in the derived classes).
Conversely, {\bf Setup} and {\bf OnBad} are concrete in the base class, with an
empty implementation: since not all actions need a setup (it can be done
also in the constructor) or a procedure for bad events, this implementation
allows one to not define them in derived classes. Obviously, the re-declared
methods in the header have to be defined in a {\it .cpp} file, exactly as
for the cuts.

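The corresponding {\it DummyAction.cpp} could then look like the following sketch; the bodies are purely illustrative placeholders (a real action would, e.g., book and fill histograms here).

\begin{verbatim}
/*! @file DummyAction.cpp The DummyAction class implementation file */

#include <iostream>
#include "DummyAction.h"

void DummyAction::Setup(PamLevel2 *events) {
  // Initialization, e.g., read parameters or book histograms.
}

void DummyAction::OnGood(PamLevel2 *event) {
  // Task for selected events, e.g., fill histograms.
}

void DummyAction::Finalize() {
  // End-of-analysis task, e.g., write histograms to file.
  std::cout << "DummyAction done" << std::endl;
}
\end{verbatim}
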
\section{How to set up the action's build}
This topic is very similar to that explained in sec.~\ref{sec:build}, so it
should be straightforward. However, look at the provided concrete
implementations of actions if you need an example to set up your build.

\section{How to build and use the library}
If the makefiles are correctly set up, the only remaining thing is to type
\verb1make all1. Remember to set the PAMELA environment with the set\_pam\_env
script BEFORE invoking \verb1make1. This will generate a {\it libPamCut.so} file
which will contain all the cuts. To clean the project and build from scratch,
type \verb1make clean all1. To use the library in an analysis code the
environment header must be included in the code:
\verb1#include "<root PamCut directory>/PamCutEnv.h"1. With this, all the
classes and common definitions will be accessible. A typical usage of {\bf
PamCut} inside the analysis code would look like:

\begin{verbatim}

PamCutCollection collection;

DummyCut1 dummy1;
collection.AddCut(dummy1);
DummyCut2 dummy2(param);
collection.AddCut(dummy2);

collection.Process(event, 0, event->GetEntries()-1);

\end{verbatim}

In the simple example above, a \verb1DummyCut11 and a \verb1DummyCut21 object
(which requires some sort of parameter) are instantiated. They are added to
\verb1collection1, which takes care of applying them to all the events.

When the analysis code is compiled, the linker must be made aware that it
needs a library called {\it libPamCut.so} and where to find it. In the {\it
makefile} which builds the analysis program, the following option must be added
to the linker invocation:
\newline
\verb1-L<root PamCut directory> -lPamCut1.

One could also wish to move {\it libPamCut.so} to another directory: this path
must then replace what is indicated as \verb1<root PamCut directory>1 above.

Finally, when the analysis code is compiled and linked against {\it libPamCut.so}, to
launch it it is necessary to tell the environment where the library is, so that
the program can dynamically access it at runtime. This information is encoded
in the environment variable LD\_LIBRARY\_PATH, which contains the paths of the
accessible libraries. If {\it libPamCut.so} is still in the root PamCut directory one
can type:
\newline
\verb1export LD_LIBRARY_PATH=<root PamCut directory>:$LD_LIBRARY_PATH1
\newline
This has to be done every time you open a shell; one way to avoid this is to
append the above line at the end of your set\_pam\_env script, so that it will
be automatically executed every time you set your PAMELA environment.

\section{Usage summary}
Here's a short summary on how to develop a cut, build it and use it.
\begin{enumerate}
\item Obtain the code (from tarball, repository\ldots) and go to the root
code directory.
\item Check that the \verb1C++1 option in the Build section of the {\it makefile}
is correctly set to the C++ compiler name present on your system (for many
Linux platforms, \verb1g++1 is a safe choice).
\item Create a directory named after the cut class you want to develop.
\item Place inside the newly created directory a {\it .h} file and a {\it
.cpp} file, named after the directory; edit the files to define and implement the
class (one can also cut and paste the files from an existing class and edit
them), defining at least the constructor, the destructor and {\bf Check} for
the new class.
\item Create inside the directory a {\it subdir.mk} file which contains the
instructions to build the directory content, as described in \ref{sec:build};
as usual, one can cut and paste from an existing class and then edit.
\item Modify the {\it makefile} in the root code directory as in
\ref{sec:build}, to include the newly developed cut.
\item Modify the {\it PamCutEnv.h} file, adding the \verb1#include1 for the
new class header (see examples therein).
\item Set the PAMELA environment with the set\_pam\_env script.
\item Build the library by typing \verb1make all1 (or \verb1make clean all1 to
build from scratch); this will produce the library {\it libPamCut.so} in the
root code directory, which will contain all the class definitions and
implementations.
\item Insert \verb1#include "<root PamCut directory>/PamCutEnv.h"1 in the
analysis code, to have access to all the classes in the library.
\item Develop the analysis code.
\end{enumerate}

\section{Some advice and suggestions}
\begin{itemize}
\item Derive your cuts. Try to define a new class every time you need a new
cut, instead of modifying an existing one. As an example, you can define a
cut with a specific implementation for {\bf Check}, then derive from it many
classes which only redefine {\bf OnGood} and {\bf OnBad} (a sketch is given
after this list). In this way, you
can have several post-selection options associated with the same cut; if
you ever need to modify the cut criteria, you will have to do it only in
one place, saving time and reducing the opportunities for coding errors.
\item Be consistent with the existing code style. Everyone has their own
code style, with their own conventions for naming variables and functions based
on personal preferences. Maintaining a uniform code style is a good way to
improve the code readability, so it's worth a little effort. The conventions
chosen for {\bf PamCut} are:
\begin{itemize}
\item the names of private and protected members (variables and methods)
always begin with an underscore, e.g., \verb1_previousOBT1, \verb1_LT1;
\item the names of variables usually begin with a lowercase letter; for compound
words, the subsequent initials are uppercase, e.g., \verb1time1,
\verb1liveTime1, \verb1nHitPaddles1;
\item the names of classes and methods begin with an uppercase letter, e.g.,
\verb1PamCut1, \verb1GeoFieldCut1, \verb1ApplyCut()1.
\end{itemize}
With these conventions, a line of code like:
\newline
\verb1 GeoFieldCut geoFieldCut;1
\newline
is easily interpreted as the instantiation of a class named
\verb1GeoFieldCut1 into an object named \verb1geoFieldCut1. This allows one to
have objects whose names are almost identical to those of the respective
classes, allowing straightforward type recognition. Also, the distinction between
public and private variables and methods inside a class is immediate.
\item Respect the interface. {\bf PamCut} has been designed following precise
rules, which allow for a quite general environment that should cover many
of the needs related to data analysis. Try to fit your
particular analysis into this scheme; this will result in much more
readable code. However, someone may need features that are not compatible with
the current interface. In this case, the first thing to do is to try a
workaround that would leave the interface unchanged. As an example, if the
automated post-selection tasks based on {\bf OnGood} and {\bf OnBad} do not
satisfy you, you can call the {\bf Check} method directly inside a loop and
then do custom post-processing. In the worst case, the interface could
turn out to be incompatible with the analysis needs: in this case a redesign
can be considered if the incompatibility is such that a large
piece of the analysis would be compromised. Redesigning is always a tricky
task, so it has to be considered as a last option.
\item Be considerate of other people. If you plan to write code that is
likely to be used and modified by other people, please take your time to
write the documentation. Documenting the code is a boring and time-consuming
task, but it can save you and your colleagues a lot of headaches and
misunderstandings. The better a piece of code is documented, the fewer
questions other people will ask you.
\end{itemize}
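
As mentioned in the first item, deriving post-selection variants from a common cut could look like the following sketch; {\bf EnergyCut} and {\bf EnergyCutHisto} are invented names used only for illustration.

\begin{verbatim}
// Base cut: owns the selection logic in Check, defined once.
class EnergyCut: public PamCut {
public:
  EnergyCut(char *cutName): PamCut(cutName) {}
  int Check(PamLevel2 *event);   // the selection criterion
};

// Variant: same selection, different post-selection task.
class EnergyCutHisto: public EnergyCut {
public:
  EnergyCutHisto(char *cutName): EnergyCut(cutName) {}
  void OnGood(PamLevel2 *event); // e.g., fill a histogram for good events
};
\end{verbatim}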

\end{document}