1 \documentclass{article}
2
3 \title{PamCut Developer's Guide}
4 \author{Nicola Mori}
5
6 \begin{document}
7 \maketitle
8 \tableofcontents
9 \section{The philosophy}
10
{\bf PamCut} is an abstract class defining the interface for a cut object used
in the analysis of PAMELA data. The main idea behind the development is that a
cut object must be capable of saying whether an event is good or not, i.e.,
whether it satisfies some selection criteria. These criteria have to be
implemented from scratch every time a specific cut is needed, by defining a
concrete class which inherits from {\bf PamCut} and provides an implementation
for the pure virtual method {\bf Check}.
A special derived class is {\bf PamCutCollection}. This is a sort of container
for basic cuts, and it is itself treated as a single cut. It has an {\bf
AddCut} method which adds a cut to the collection. Its {\bf Check}
implementation simply invokes the {\bf Check}s of all the single cuts, and it
is successful if all the single cuts are successful. {\bf PamCut} also provides
the interface for two post-processing methods: {\bf OnGood} and {\bf OnBad}.
In derived classes these can contain specific tasks to be performed depending
on whether an event satisfies the {\bf Check} condition or not. The method {\bf
ApplyCut} takes care of invoking {\bf Check} and subsequently calls {\bf
OnGood} or {\bf OnBad} according to the result of {\bf Check}. Summarizing,
{\bf Check}ing an event simply means asking the object whether the event
satisfies the selection criteria; applying a cut means checking and then
performing post-selection tasks. The cut can be applied to a bunch of events
by means of the {\bf Process} method.
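
The logic of {\bf ApplyCut} can be sketched as follows; this is only an
illustration of the flow described above, not the actual framework code, and
the exact signatures (in particular whether {\bf OnBad} receives the value
returned by {\bf Check}) should be checked in the Doxygen documentation:
\begin{verbatim}
// Illustrative sketch of the ApplyCut flow (not the actual framework
// code; the signatures are assumed).
int PamCut::ApplyCut(PamLevel2 *event) {
  int result = Check(event);   // ask the cut whether the event is good
  if (result == CUTOK)
    OnGood(event);             // post-selection task for good events
  else
    OnBad(event, result);      // post-selection task for bad events
  return result;
}
\end{verbatim}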
32
33 \subsection{More on collections}
34 A collection is an object which inherits from {\bf PamCutCollection}, which in
35 turn inherits from {\bf PamCut}. So a collection is a cut itself, meaning that
36 its interface contains a {\bf Check} method, an {\bf ApplyCut} method and so
37 on. Logically, this is in agreement with the fact that a bunch of cuts can be
38 thought of as a single cut whose result is the logical AND of all the basic
cuts. More specifically, the implementation chosen for the {\bf PamCutCollection}
methods consists in simply looping over the basic cuts and calling their
corresponding methods. So {\bf PamCutCollection::Check} will call in turn all
the {\bf Check} methods of the basic cuts it contains, and {\bf
PamCutCollection::ApplyCut} will call the basic {\bf ApplyCut}s. This last
feature deserves some more words. When the collection calls a basic {\bf
ApplyCut}, the basic cut will also perform its specific post-selection tasks as
defined in its implementations of {\bf OnGood} and {\bf OnBad}. If all the
basic cuts turn out to be satisfied, then {\bf PamCutCollection::ApplyCut} will
call {\bf PamCutCollection::OnGood}, allowing for the execution of a
post-processing task which occurs only if all the basic selection criteria are
satisfied. Indeed, as said above a collection is a cut, so it behaves exactly
like a cut: when you apply it, you will also perform the appropriate
post-processing. We then have two levels of post-processing: the first is
triggered by the success of a basic cut, the second by the success of the whole
sequence of basic cuts. This modular behaviour achieved with collections allows
for the definition of a hierarchy of cuts: a collection can contain other
collections, which in turn can contain other collections or basic cuts, in a
tree-like hierarchy. Each level has its own post-selection tasks, allowing for
fine control of the post-processing procedure.
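
As an illustration, a two-level hierarchy could be built like this ({\bf
DummyCut1} and {\bf DummyCut2} are hypothetical concrete cuts; with the default
ownership described in the next subsection, the outer collection takes care of
deleting everything added to it):
\begin{verbatim}
// Hypothetical example of a nested cut hierarchy.
PamCutCollection *trackerCuts = new PamCutCollection("trackerCuts");
trackerCuts->AddCut(new DummyCut1("chi2"));
trackerCuts->AddCut(new DummyCut2("nHits"));

PamCutCollection allCuts("allCuts");
allCuts.AddCut(trackerCuts);           // a collection is itself a cut
allCuts.AddCut(new DummyCut1("tof"));  // collections and basic cuts can be mixed
\end{verbatim}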
59
For some tests it can be useful to have a collection which applies the whole
bunch of cuts to all the events, regardless of whether some cuts fail for a
specific event ({\bf PamCutCollection} stops the evaluation of the current
event as soon as a cut is not satisfied). This is achieved with the
{\bf BlindCutCollection} class, which blindly checks all the cuts for all the
events. This leads to a call to {\bf OnGood} or {\bf OnBad} for each cut for
each event; the collection will then call its own {\bf OnGood} or {\bf OnBad}
depending on whether all the cuts have been satisfied, much like
{\bf PamCutCollection}. See the Doxygen HTML documentation for more info about
{\bf BlindCutCollection} and other specific cut implementations.
70
71 \subsection{Collections and memory management}
72
Regarding cut ownership, a collection by default owns its cuts. This means
that when you add a cut to a collection by calling {\bf AddCut} with a
pointer to the cut as argument, the collection will store this pointer and
take charge of deleting it; this is done by the collection's destructor.
This way, a user can simply add a cut to a collection and forget about
deleting it. However, sometimes this feature is not desirable, so it can be
turned off by passing a proper value to the collection's constructor. In this
case, the user is in charge of managing the memory allocated for the cuts.
81
82
83 \section{Actions and the SmartCollection}
84 When performing an analysis, each time an event is selected as good some
85 actions are likely to be performed, like filling histograms or writing a
86 report. To automate these tasks, the class {\bf CollectionAction} has been
87 designed. A {\bf CollectionAction} object has a {\bf Setup} method which
88 may contain the initialization procedures, like reading parameters from a file.
The constructor can also contain some initializations. The finalization, like
writing histograms to a file, is done in the {\bf Finalize} method. The specific
actions for good and bad events are to be defined in the {\bf OnGood} and {\bf
OnBad} methods, much like in collections. {\bf CollectionAction} is an abstract
class, which does nothing but define the interface. Its concrete
implementations will be called actions.\\
Actions are automatically handled by the {\bf SmartCollection} class. It
inherits from {\bf PamCutCollection} and contains a vector of {\bf
CollectionAction} objects. These actions can be added by calling {\bf
SmartCollection::AddAction}; for each of them, the collection will take care of
calling {\bf Setup} at the beginning of the analysis, {\bf OnGood} and {\bf
OnBad} for every event (depending on the selection result), and {\bf Finalize}
at the end of the analysis. In all other respects, it behaves exactly like {\bf
PamCutCollection}. The handling of the actions depends on their position with
respect to the cuts. In the main analysis code, cuts can be added to the
collection; then one or more actions can be added. A sequence of cuts followed
by the actions added after them will be called a bunch. The last bunch may have
no actions at its end.
107 \begin{verbatim}
108 .
109 .
110 .
Cut1 cut1("cut1");
Cut2 cut2("cut2");
Action1 action1("action1");

Cut1 cut3("cut3");
Cut2 cut4("cut4");
Cut1 cut5("cut5");
Cut2 cut6("cut6");
Action1 action2("action2");
Action1 action3("action3");
121 .
122 .
123 .
124 \end{verbatim}
125 In the example above, {\bf cut1}, {\bf cut2} and {\bf action1} are the first
126 bunch, {\bf cut3}, {\bf cut4}, {\bf cut5}, {\bf cut6}, {\bf action2} and {\bf
127 action3} are the second bunch and so on. If all the cuts in the bunch are
128 successful, the {\bf SmartCollection} will call {\bf OnGood} for every action
129 in that bunch, and then the analysis will go on with the next bunch for the same
event. If a cut in a bunch fails, then {\bf OnBad} is called for the
actions of that bunch, but the successive bunches are ignored; the current
event is discarded and the focus switches to the next one (the {\bf
SmartBlindCollection} behaves a little differently; see the Doxygen
documentation for more details).
135 \\
136 Loosely speaking, after defining an action one simply has to instantiate it,
137 add it to a {\bf SmartCollection} and launch the analysis.
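
For instance, a hedged sketch of how this could look in the analysis code (the
{\bf SmartCollection} constructor and the {\bf Process} call are assumed to
work as for {\bf PamCutCollection}; the cut and action classes and the
\verb1event1 pointer are placeholders):
\begin{verbatim}
// Hypothetical sketch: two bunches handled by a SmartCollection.
SmartCollection analysis("analysis");

analysis.AddCut(new Cut1("cut1"));           // first bunch: its cuts...
analysis.AddCut(new Cut2("cut2"));
analysis.AddAction(new Action1("action1"));  // ...and its action

analysis.AddCut(new Cut1("cut3"));           // second bunch
analysis.AddAction(new Action1("action2"));

// event is the PamLevel2 pointer used throughout the analysis
analysis.Process(event, 0, event->GetEntries()-1);
\end{verbatim}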
138
139 \subsection{SmartCollections and memory management}
Like the standard collection, a SmartCollection can handle the memory
management for its actions. This works exactly as for cuts. Note that cuts and
actions have to be managed uniformly, i.e., one cannot turn off ownership only
for cuts or only for actions.
144
145 \section{The software organization}
The software is organized in a tree of directories. The idea is that each node
of this tree must provide the necessary information about its sub-branches. In
148 each directory, a header file will contain \verb1#include1 directives for all
149 header files in the sub-directories. This way, it is sufficient to include the
150 top-level header in the analysis software to automatically include all the
151 headers of the project. This top-level header is {\it PamCutEnv.h}. Each time a
152 sub-directory is created, the headers it contains must be included in the
153 parent node's header.
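
For instance, a node header could look like this (the directory and file names
are purely hypothetical):
\begin{verbatim}
/* Hypothetical node header DummyDir/DummyDir.h: it only gathers the
   headers of its sub-directories, as described above. */
#include "DummyCut/DummyCut.h"
#include "OtherCut/OtherCut.h"
\end{verbatim}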
154
155 \subsection{The makefile organization}
156 The instructions to build the software library are encoded in the standard {\bf
157 make} way, following the same tree structure of the software. Each node
158 directory contains an {\it include.mk} file, which includes all the
{\it include.mk} files in the sub-directories. This chain of inclusions
terminates at the leaves, which most of the time contain the definition and
implementation of the classes. Each leaf directory must provide build
instructions for the files it contains, in a file called {\it subdir.mk}; this
file has to be included in the upper {\it include.mk}. The top-level {\it
include.mk} is included by the {\it makefile}.\\
This way, each directory takes care of its sub-directories, allowing for quick
and easy addition of new ones.
167
168 \section{The software architecture}
169 The software is organized in a tree of directories. The root folder contains a
170 {\it PamCutBase} folder, in which the definitions of the base classes ({\bf
171 PamCut} and {\bf PamCutCollection}) are stored, and the general headers {\it
172 PamCutEnv.h} and {\it CommonDefs.h}. {\it PamCutEnv.h} contains the inclusion
173 of all the headers of the framework, allowing for the use of the entire
174 software with only one \verb1#include1 in the code; {\it CommonDefs.h} contains
175 all the definitions which are relevant for the entire framework.
To develop a specific cut one has to define a class derived from {\bf PamCut},
which must at least provide a concrete implementation of {\bf Check}. One can
also define new versions of {\bf OnGood} and {\bf OnBad} for specific
post-selection tasks (in the base class these methods do nothing). Be very
careful if you decide to redefine {\bf ApplyCut}: remember that the interface
requires it to call the post-selection routines.
A good rule for keeping the software distribution clean is to use a different
folder for each cut definition, named after the cut itself. When you define a
new cut, create a folder named after the cut inside an appropriate parent
directory and place the {\it .h} and {\it .cpp} files inside it. To speed up
development, you may copy and paste an existing cut and then modify it.\\
Actions can be defined in the same way.
188
189 \section{How to define a cut}
As said above, to define a cut (let's name it {\bf DummyCut}) it is suggested
to create a folder {\it DummyCut} inside, e.g., a {\it DummyDir} directory
in the root folder, and to create inside it two files: {\it DummyCut.h},
which will contain all the declarations, and {\it DummyCut.cpp}, where the
implementation will be coded. A typical structure for {\it DummyCut.h} would be:
195
196 \begin{verbatim}
197 #ifndef DUMMYCUT_H_
198 #define DUMMYCUT_H_
199
200 /*! @file DummyCut.h The DummyCut class declaration file */
201
202 #include "../../PamCutBase/PamCutBase.h"
203
204 /*! @brief An example cut. */
205 class DummyCut: public PamCut {
206 public:
207
208 /*! @brief Constructor. */
DummyCut(char *cutName):
210 PamCut(cutName) {
211 }
212
213 /*! @brief Destructor. */
214 ~DummyCut() {
215 }
216
217 /*! @brief The dummy cut check
218 *
219 * This routine checks the event using the dummy cut criteria.
220 *
221 * @param event The event to analyze.
222 * @return CUTOK if the dummy cut is satisfied
223 */
224 int Check(PamLevel2 *event);
225
226 };
227
228 #endif /* DUMMYCUT_H_ */
229
230 \end{verbatim}
231
Note the inclusion of {\it PamCutBase.h}: this is essential to let {\bf
DummyCut} know about its parent class, which in this case is {\bf PamCut}.
Another thing to take care of is the multiple-inclusion protection, i.e., the
preprocessor directives:
236 \begin{verbatim}
237 #ifndef DUMMYCUT_H_
238 #define DUMMYCUT_H_
239 .
240 .
241 .
242 #endif /* DUMMYCUT_H_ */
243 \end{verbatim}
This provides protection against multiple inclusion of a header in a single
compilation unit due to circular inclusion; in practice, it will save the
developer a lot of double-definition errors from the compiler. Be sure that all
your headers are wrapped in such a structure, and that each header uses a
different name after \verb1#ifndef1 (which must match the subsequent
\verb1#define1); otherwise, the compiler could skip the inclusion of some
headers. The convention adopted in {\bf PamCut} for such tags is
\verb1<Filename in uppercase>_H_1, which provides a unique tag for each header.
252
The essential redeclarations are the constructor, the destructor and {\bf
Check}. The constructor deserves some observations: since constructors are not
inherited, each class must have its own. However, if a class has a parent
class, it is also necessary to initialize the parent class. If the parent class
has a default constructor (i.e., a constructor without arguments) the compiler
will automatically call it and the user has nothing to worry about. However,
{\bf PamCut} has no default constructor, so every derived class must contain
an explicit call to the {\bf PamCut} constructor. This is what is done in
the lines:
262
263 \begin{verbatim}
DummyCut(char *cutName):
265 PamCut(cutName) {
266 }
267 \end{verbatim}
{\bf PamCut} objects have a name, internally stored in the {\bf \_cutName}
member; so our {\bf DummyCut}, since it inherits from {\bf PamCut}, must also
have a name. Its constructor, indeed, requires a string containing the cut's
name. But {\bf DummyCut} has no local variable to store it, and there's no need
to define one since it is already present in {\bf PamCut}. The only thing to do
is to initialize it exactly as {\bf PamCut} would do: this is the purpose of
invoking \verb1PamCut(cutName)1. As an aside, what follows the colon after the
{\bf DummyCut} constructor declaration is called the \emph{initialization list}:
here one can (and must) call the constructors of all the objects and variables
contained in the class. These are executed before the body of the class
constructor, so one can safely assume that at construction time all the
internal objects and variables are properly initialized.
280
281 To recap and generalize, when you write a derived class you must always call
282 the constructor of its parent class, which will take care about initializing
283 the ``parent core'' of the derived class.
284
One can also redefine {\bf OnGood} and {\bf OnBad}, add new methods and
variables, and so on. If you plan to build a general-purpose cut which could be
used by different people, please take the time to add the appropriate
documentation to the code. In the example, Doxygen comment lines (beginning
with \verb1/*!1) are shown; an online guide for Doxygen can be found at:
290 \newline
291 \newline
292 \verb1 http://www.stack.nl/~dimitri/doxygen/manual.html1
293
294 \vspace{.5cm}
295 Once the header has been prepared, it's time to implement the cut in {\it
296 DummyCut.cpp}:
297
298 \begin{verbatim}
299 /*! @file DummyCut.cpp The DummyCut class implementation file */
300
301 #include "DummyCut.h"
302
303 int DummyCut::Check(PamLevel2 * event) {
304
305 if (<Some condition about the event>)
306 return CUTOK;
307 else
308 return 0;
309 }
310 \end{verbatim}
311
In this very simple implementation a basic feature is noticeable: the interface
requires that whenever an event satisfies the selection criterion, {\bf Check}
must return the {\bf CUTOK} value. This value is defined in {\it CommonDefs.h}.
The return value for a discarded event is implementation-specific, and can be
anything but {\bf CUTOK}. It could take different values depending on the
reason why the event has been discarded: for example, in a data-quality cut it
can be a number associated with the specific detector whose data are missing.
It is then passed to {\bf OnBad}, which can perform some task depending on the
specific reason for the cut failure (e.g., which detector has no data).
Remember also to include the cut's header file, as shown above.
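
Coming back to the failure codes, here is a hedged sketch of how they could be
used; the class, its helper functions and its counters are hypothetical, and
the {\bf OnBad} signature shown (the event plus the value returned by {\bf
Check}) should be checked against the actual interface:
\begin{verbatim}
// Hypothetical data-quality cut: Check() encodes the failing detector
// in its return value, and OnBad() reacts accordingly.
int DataQualityCut::Check(PamLevel2 *event) {
  if (!HasTofData(event))    // hypothetical helper
    return 1;                // failure code: ToF data missing
  if (!HasCaloData(event))   // hypothetical helper
    return 2;                // failure code: calorimeter data missing
  return CUTOK;              // the event passes the data-quality cut
}

void DataQualityCut::OnBad(PamLevel2 *event, int selectionResult) {
  if (selectionResult == 1) _nTofFailures++;   // hypothetical counters
  if (selectionResult == 2) _nCaloFailures++;
}
\end{verbatim}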
323
\section{How to set up the cut's build} \label{sec:build}
325 Once a cut has been declared and implemented, the makefiles have to be adjusted
326 to include it in the cut library. The {\it makefile} provided with the software
327 will build a library called {\it libPamCut.so}, which can be then linked to a
328 specific analysis code. It is based on a submakefile structure. Each folder
containing a cut must also include one of these submakefiles (named {\it
subdir.mk} by convention), which instructs the main {\it makefile} on how to
build the newly added cut. An example is:
332
333 \begin{verbatim}
334 # Choose a name for the object file. This must coincide with the
335 # .cpp and .h filename, except for the extension which has to
336 # be .o
337 OBJS += ./DummyDir/DummyCut/DummyCut.o
338
339 # Dependencies file. The extension must be .d, and the name equal
340 # to the cut name.
341 CPP_DEPS += ./DummyDir/DummyCut/DummyCut.d
342
343 # Rules for compilation. You will likely have only to specify
344 # the path. Put the directory path:
345 # here here
346 ./DummyDir/DummyCut/%.o: ./DummyDir/DummyCut/%.cpp
347 @echo 'Building file: $<'
348 @echo 'Invoking: GCC C++ Compiler'
349 $(C++) -I${ROOTSYS}/include -I${PAM_INC} -I${PAM_INC}/yoda \
350 -Wall -c -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)"\
351 -o"$@" "$<"
352 @echo 'Finished building: $<'
353 @echo ' '
354
355 \end{verbatim}
356
Existing files can be used as templates. The first thing you have to modify
is the object name (the \verb1OBJS1 variable): be careful to append the object
name to \verb1OBJS1 using \verb1+=1 instead of overwriting it with \verb1=1.
The paths in this file are relative to the root directory where the {\it
makefile} is, e.g., \verb1./1 is not the {\it DummyCut} directory where {\it
subdir.mk} is, but the root directory which contains the {\it makefile}. The
object file ({\it DummyCut.o}) must have the same name as the cut and as the
directory that contains it. The \verb1CPP_DEPS1 variable must be modified in
the same way. This variable contains a list of dependency files, each of which
lists the external headers (e.g., PAMELA and ROOT headers) {\bf PamCut} depends
on. These lists are automatically generated, and allow {\it make} to rebuild
{\bf PamCut} whenever one of these headers is modified. Finally, one also has
to put the directory name in the target line which precedes the build command,
just below \verb1here1.
371
372 After creating the {\it subdir.mk}, it must be included in the {\it include.mk}
373 in the parent directory. It looks like this:
374
375 \begin{verbatim}
376 # Include here the submakefiles for each cut
377
378 # Cuts
379 -include DummyDir/DummyCut/subdir.mk
380 .
381 .
382 .
383 \end{verbatim}
384
385
Remember to write the paths relative to the root directory. Going backwards
along the chain of inclusions leads to the {\it makefile}:
388
389 \begin{verbatim}
390 # ------------------------ Build options ----------------------#
391
392 # C++ compiler and linker
393 C++ = g++
394
395 # Optimization flags.
396 OPTIMIZE = -g3 #-DDEBUGPAMCUT
397
398 # Library flags
399 EXCLUSIONFLAGS = #-DNO_TOFNUCLEI -DNO_CALONUCLEI -DNO_TRKNUCLEI
400
401 COMMONDEPS = makefile
402
403 -include include.mk
404
405 #------------------------ Make body ---------------------------#
406 #
407 # Below the make commands are defined. There are no options to
408 # set in this section, so it has to be modified only in case of
409 # radical changes to the make procedure.
410
411 # Remove command for clean
412 RM := rm -rf
413
414 # Additional dependencies from headers of other software
415 # (PAMELA, ROOT)
416 ifneq ($(MAKECMDGOALS),clean)
417 ifneq ($(strip $(CPP_DEPS)),)
418 -include $(CPP_DEPS)
419 endif
420 endif
421
422 # All Target
423 all: version libPamCut.so
424
425 # Tool invocations
426 libPamCut.so: $(OBJS) $(USER_OBJS)
427 @echo 'Building target: $@'
428 @echo 'Invoking: GCC C++ Linker'
429 $(C++) -shared -o"libPamCut.so" $(OBJS)
430 @echo 'Finished building target: $@'
431 @echo ' '
432
433 # Other Targets
434 clean:
435 -$(RM) $(CPP_DEPS) $(OBJS) libPamCut.so
436 -@echo ' '
437
438 version:
439 @gcc --version | grep gcc; echo
440
441 .PHONY: all clean dependents version
442 \end{verbatim}
443
The Build options section is the one that will most likely need modifications
if one wants to tweak the compilation; the make body will most of the time be
good as it is. The {\it makefile} contains comments explaining how to set the
various options.\\
The first option to set is the compiler: g++ will work on
almost all Linux systems. Next is the optimization level, which can be set to
one of the proposed values according to the specific needs. The {\it
makefile} comments also contain instructions on how to enable debug sections
in the code.\\
Since the PAMELA software is modular, some setups may lack libraries needed by
certain cuts. When designing such a cut it is a good habit to make its
exclusion from the library possible. This can be done efficiently using
the preprocessor directive \verb1#ifndef1. For example, wrapping all the code
in the header and implementation files of {\bf DummyCut} in a structure like
this:
459
460 \begin{verbatim}
#ifndef NO_DUMMYCUT
462 .
463 .
464 .
465 #endif
466 \end{verbatim}
467
\noindent will completely exclude {\bf DummyCut} from the environment if the
flag \verb1NO_DUMMYCUT1 is defined. This can be done by passing the parameter
\verb1-DNO_DUMMYCUT1 to the compiler. These exclusion flags can be defined in
the \verb1EXCLUSIONFLAGS1 variable. A concrete example is given by
{\bf TofNucleiZCut}, which requires the {\bf ToFNuclei} library: look at it to
see how its exclusion from the code is implemented, making it possible to build
and use {\bf PamCut} in environments that do not include {\bf ToFNuclei}. In
the end, this is very similar to what is done with the debug sections.\\
The \verb1COMMONDEPS1 variable contains the files which, if modified, will
trigger a complete rebuild of the library. For example, if you change the {\it
makefile} by modifying an optimization option, all the modules should be
rebuilt so that the whole library will have the same level of optimization.
That's why the {\it makefile} is in \verb1COMMONDEPS1. Add any other files that
should behave like this.\\
482
483 If you need some extra modifications to the building system you need to know
484 more about {\it make}; an online guide is at:
485 \newline
486 \verb1http://www.linuxtopia.org/online_books/programming_tool_guides/1
487 \newline
488 \verb1 gnu_make_user_guide/make.html#SEC_Top1
489
490
491 \section{How to define an action}
492 Defining an action is very similar to defining a cut. An example header:
493
494 \begin{verbatim}
495 #ifndef DUMMYACTION_H_
496 #define DUMMYACTION_H_
497
498 #include "../CollectionAction/CollectionAction.h"
499
500 /*! @brief A dummy action definition. */
501 class DummyAction: public CollectionAction {
502
503 public:
504 /*! @brief Constructor.
505 *
506 * @param actionName The action's name.
507 */
508 DummyAction(const char *actionName):
509 CollectionAction(actionName){}
510
511 /*! @brief Destructor */
512 ~DummyAction() {
513 }
514
515 /*! @brief The setup procedure.
516 *
517 * @param events The events pointer.
518 */
519 void Setup(PamLevel2 *events);
520
521 /*! @brief The OnGood procedure.
522 *
523 * @param event The selected event.
524 */
525 void OnGood(PamLevel2 *event);
526
527 /*! @brief Writes the tree of saved events to the output file. */
528 void Finalize();
529 };
530
531 #endif /* DUMMYACTION_H_ */
532
533 \end{verbatim}
534
The {\bf DummyAction} declaration above is a good example. Action classes
must inherit from {\bf CollectionAction}, and their constructors have to call
the constructor of the ancestor class in the initialization list, similarly to
what happens for cuts. Then some of the base class methods are overridden,
specifically {\bf Setup}, {\bf OnGood} and {\bf Finalize}. The last two methods
have to be overridden, since they are pure virtual (an action is supposed to do
something for good events and something to end the analysis, so making these
methods pure virtual is a way to enforce a definition in the derived classes).
Conversely, {\bf Setup} and {\bf OnBad} are concrete in the base class, with
empty implementations: since not all actions need a setup (it can also be done
in the constructor) or a procedure for bad events, this allows derived classes
not to define them. Obviously, the methods re-declared in the header have to be
defined in a {\it .cpp} file, exactly as for the cuts.
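
A hedged sketch of the corresponding {\it DummyAction.cpp} (the method bodies
are only placeholders for the actual tasks):
\begin{verbatim}
/*! @file DummyAction.cpp The DummyAction class implementation file */

#include "DummyAction.h"

void DummyAction::Setup(PamLevel2 *events) {
  // Initialization, e.g., booking histograms or opening an output file.
}

void DummyAction::OnGood(PamLevel2 *event) {
  // Task performed for each event which passes all the cuts of the bunch.
}

void DummyAction::Finalize() {
  // End-of-analysis task, e.g., writing histograms to the output file.
}
\end{verbatim}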
549
\section{How to set up the action's build}
This topic is very similar to what is explained in sec.~\ref{sec:build}, so it
should be straightforward. However, look at the provided concrete
implementations of actions if you need an example to set up your build.
554
555 \section{How to build and use the library}
556 \subsection{Standard Pamela environment}
557 If the makefiles are correctly set up, the only remaining thing is to type
558 \verb1make all1. Remember to set the PAMELA environment with the set\_pam\_env
559 script BEFORE invoking \verb1make1. This will generate a {\it libPamCut.so} file
560 which will contain all the cuts. To clean the project and build from scratch
561 type \verb1make clean all1. The software can then be installed in the usual
562 Pamela environment calling \verb1make install1: this will place all the
563 headers in the folder \verb1$PAM_INC/PamCut1 and the {\it libPamCut.so} file in
\verb1$PAM_LIB1. To eliminate the installed files call \verb1make distclean1;
note that this will NOT do the work of \verb1make clean1, i.e., clean the
project, but will simply remove the files copied into the Pamela directories.
Remember to type \verb1make install1 each time you modify and recompile the
software, to update the installed version.
569
To use the library in an analysis code, the environment header must be included
in the code: \verb1#include "<root PamCut directory>/PamCutEnv.h"1. With this,
all the classes and common definitions will be accessible. A typical usage of
{\bf PamCut} inside the analysis code would look like:
575
576 \begin{verbatim}
577
578 #include <PamCut/PamCutEnv.h>
579
580 int main(){
581 .
582 .
583 .
584
585 PamCutCollection collection("Collection");
586
587 DummyCut1 *dummy1 = new DummyCut1("name1");
588 collection.AddCut(dummy1);
589 // The two lines above can be summarized also as:
590 // collection.AddCut(new DummyCut1("name1"));
591
592 DummyCut2 *dummy2 = new DummyCut2("name2", <eventual params>);
593 collection.AddCut(dummy2);
594
595 collection.Process(event, 0, event->GetEntries()-1);
596
597 .
598 .
599 .
600 }
601
602 \end{verbatim}
603
In the simple example above, a \verb1DummyCut11 object and a \verb1DummyCut21
object (which requires some parameters) are instantiated. They are added to
\verb1collection1, which takes care of applying them to all the events.
607
608 \subsection{Custom environment}
If you don't have access to the Pamela software directories (e.g., you don't
have write permission on them) you cannot install the software, but you can
still use {\bf PamCut} directly from the source folder.
612
First of all, you have to tell the compiler where to find the {\bf PamCut}
headers. They are in the main {\bf PamCut} directory, so you may add this
option:
616 \newline
617 \verb1 -I<directory>1
618 \newline
619 to the compiler invocation in the {\it makefile} of your main analysis program.
620 This tells the compiler to search for headers in the folder specified after
\verb1-I1. So, if {\it <directory>} is the folder which contains the main {\bf
PamCut} folder, you don't have to change anything in your main analysis
file (with respect to what is described in the previous subsection), since:
624 \newline
625 \verb1 #include <PamCut/PamCutEnv.h>1
626 \newline
includes the file {\it PamCutEnv.h} located in the {\it PamCut} folder inside
one of the standard inclusion directories, one of which is the directory
specified with the \verb1-I1 compiler option. Obviously, one can play with the
directory layout, taking care to indicate the right paths to the compiler.
631
632 The following option must be added to the linker invocation:
633 \newline
634 \verb1 -L<root PamCut directory> -lPamCut1.
635 \newline
636 to tell the linker where the dynamic library is.
637
Finally, to launch the analysis code once it has been compiled and linked
against libPamCut.so, it is necessary to tell the environment where the library
is, so that the program can dynamically load it at runtime. This information is
encoded in the environment variable LD\_LIBRARY\_PATH, which contains the paths
of the accessible libraries. If libPamCut.so is still in the root PamCut
directory one can type:
644 \newline
645 \verb1export LD_LIBRARY_PATH=<root PamCut directory>:$LD_LIBRARY_PATH1
646 \newline
647 This has to be done every time you open a shell; one way to avoid this is to
648 append the above line at the end of your set\_pam\_env script, so that it will
649 be automatically executed every time you set your PAMELA environment.
650
651 \section{Usage summary}
652 Here's a short summary on how to develop a cut, build and use it.
653 \begin{enumerate}
654 \item Obtain the code (from tarball, repository\ldots) and go in the root
655 code directory.
\item Check that the \verb1C++1 option in the Build section of the {\it
makefile} is correctly set to the name of the C++ compiler present on your
system (for many Linux platforms, \verb1g++1 is a safe choice).
\item Create a directory named after the cut class you want to develop.
\item Place inside the newly created directory a {\it .h} file and a {\it
.cpp} file, named after the directory; edit the files to define and implement
the class (one can also copy and paste the files from an existing class and
edit them), defining at least the constructor, the destructor and {\bf Check}
for the new class.
665 \item Create inside the directory a {\it subdir.mk} file which contains the
666 instructions to build the directory content, as described in \ref{sec:build};
667 as usual, one can cut and paste from an existing class and then edit.
668 \item Modify the {\it makefile} in the root code directory as in
669 \ref{sec:build}, to include the newly developed cut.
\item Modify the {\it PamCutEnv.h} file, adding the \verb1#include1 directive
for the new class header (see the examples therein).
672 \item Set the PAMELA environment with the set\_pam\_env script.
673 \item Build the library typing \verb1make all1 (or \verb1make clean all1 to
674 build from scratch); this will produce the library {\it libPamCut.so} in the
675 root code directory, which will contain all the class definitions and
676 implementations.
\item Insert \verb1#include "<root PamCut directory>/PamCutEnv.h"1 in the
analysis code, to have access to all the classes in the library.
\item Develop the analysis code.
680 \end{enumerate}
681
682 \section{Online documentation}
The code is provided with a full set of Doxygen tags; the documentation can
then be built using the Doxygen documentation system. Online documentation is
also available at:
686 \newline
687 \newline
688 \verb1 http://hep.fi.infn.it/PAMELA/documents/PamCut_documentation/1
689
690 \vspace{.5cm}
It is updated frequently, but it may lack the description of the most recent
features. Please report any fault in the online documentation so that it can be
fixed quickly.
693
694
\section{Some advice and suggestions}
696 \begin{itemize}
\item Derive your cuts. Try to define a new class every time you need a new
cut, instead of modifying an existing one. As an example, you can define a
cut with a specific implementation of {\bf Check}, then derive from it many
classes which only redefine {\bf OnGood} and {\bf OnBad}. In this way, you
can have several post-selection options associated with the same cut; if
you ever need to modify the cut criteria, you will have to do it in only
one place, saving time and reducing the opportunities for coding errors.
\item Be consistent with the existing code style. Everyone has their own
code style, with their own conventions for naming variables and functions based
on personal preferences. Maintaining a uniform code style is a good way to
improve code readability, so it's worth a little effort. The conventions
chosen for {\bf PamCut} are:
709 \begin{itemize}
\item the names of private and protected members (variables and methods)
always begin with an underscore, e.g., \verb1_previousOBT1, \verb1_LT1;
\item the names of variables usually begin with a lower-case letter; for
compound words, the subsequent initials are upper case, e.g., \verb1time1,
\verb1liveTime1, \verb1nHitPaddles1;
\item the names of classes and methods begin with an upper-case letter, e.g.,
\verb1PamCut1, \verb1GeoFieldCut1, \verb1ApplyCut()1.
717 \end{itemize}
718 Within these conventions, a code row like:
719 \newline
720 \verb1 GeoFieldCut geoFieldCut;1
721 \newline
is easily interpreted as the instantiation of a class named
\verb1GeoFieldCut1 into an object named \verb1geoFieldCut1. This makes it
possible to have objects whose names are almost identical to those of the
respective classes, allowing straightforward type recognition. Also, the
distinction between public and private variables and methods inside a class is
immediate.
\item Respect the interface. {\bf PamCut} has been designed following precise
rules, which allow for a quite general environment that should cover many
of the needs related to data analysis. Try to fit your
particular analysis into this scheme; this will result in much more
readable code. However, someone may need features that are not compatible with
the current interface. In this case, the first thing to do is to try a
workaround that leaves the interface unchanged. As an example, if the
automated post-selection tasks based on {\bf OnGood} and {\bf OnBad} do not
satisfy you, you can call the {\bf Check} method directly inside a loop and
then do custom post-processing. In the worst case, the interface could
turn out to be incompatible with the analysis needs: then a redesign can be
considered, if the incompatibility is such that a large part of the analysis
would otherwise be compromised. Redesigning is always a tricky task, so it has
to be considered a last option.
\item Take care of other people. If you plan to write code that is
likely to be used and modified by other people, please take the time to
write the documentation. Documenting the code is a boring and time-consuming
task, but it can save you and your colleagues a lot of headaches and
misunderstandings. The better a code is documented, the fewer questions other
people will ask you.
747 \end{itemize}
748
749
750 \end{document}
