\documentclass{article}

\title{PamCut Developer's Guide}
\author{Nicola Mori}

\begin{document}
\maketitle
\tableofcontents
\section{The philosophy}

{\bf PamCut} is an abstract class, defining the interface for a cut object used
in the analysis of PAMELA data. The main idea behind its design is that a cut
object must be capable of saying whether an event is good or not, i.e., if it
satisfies some selection criteria. These criteria have to be implemented from
scratch every time a specific cut is needed, by defining a concrete class which
inherits from {\bf PamCut} and provides an implementation for the pure virtual
method {\bf Check}.
A special derived class is {\bf PamCutCollection}. This is a sort of container
for basic cuts, and is itself treated as a single cut. Its {\bf Check}
implementation simply invokes the {\bf Check}s of all the single cuts, and it
is successful if all the single cuts are successful.
{\bf PamCut} also provides the interface for two post-processing methods:
{\bf OnGood} and {\bf OnBad}. In derived classes these can contain specific
tasks to be performed whenever an event satisfies the {\bf Check} condition or
fails it. The method {\bf ApplyCut} takes care of invoking {\bf Check} and
subsequently calls {\bf OnGood} or {\bf OnBad} according to the result of {\bf Check}.
Summarizing, {\bf Check}ing an event simply means asking the object whether the
event satisfies the selection criteria; applying a cut means checking and then
performing the post-selection tasks. The cut can be applied to a bunch of
events by means of the {\bf Process} method.
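As a rough sketch, the interplay just described can be pictured as follows;
this is not the actual framework source, and the exact signatures may differ
(see the Doxygen documentation), but the logic reflects the description above.
\begin{verbatim}
// Hedged sketch of the described logic; signatures are illustrative only.
void PamCut::ApplyCut(PamLevel2 *event) {
  int result = Check(event);   // ask the cut about this event
  if (result == CUTOK)
    OnGood(event);             // post-selection task for good events
  else
    OnBad(event, result);      // post-selection task for bad events,
                               // which receives the Check result
}
\end{verbatim}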
\subsection{More on collections}
A collection is an object which inherits from {\bf PamCutCollection}, which in
turn inherits from {\bf PamCut}. So a collection is a cut itself, meaning that
its interface contains a {\bf Check} method, an {\bf ApplyCut} method and so
on. Logically, this is in agreement with the fact that a bunch of cuts can be
thought of as a single cut whose result is the logical AND of all the basic
cuts. More specifically, the implementation chosen for the {\bf PamCutCollection}
methods consists in simply calling the corresponding methods of the basic cuts
in sequence. So {\bf PamCutCollection::Check} will call in turn all the {\bf
Check} methods of the basic cuts it contains, and {\bf
PamCutCollection::ApplyCut} will call the basic {\bf ApplyCut}s. This last
feature deserves some more words. When the collection calls a basic {\bf
ApplyCut}, the basic cut will also perform its specific post-selection tasks as
defined in its implementations of {\bf OnGood} and {\bf OnBad}. If all the
basic cuts turn out to be satisfied, then {\bf PamCutCollection::ApplyCut} will
call {\bf PamCutCollection::OnGood}, allowing for the execution of a
post-processing task which occurs only if all the basic selection criteria are
satisfied. Indeed, as said above a collection is a cut, so it behaves exactly
like a cut: when you apply it, you will also do the appropriate post-processing.
We then have two levels of post-processing: the first is triggered by the
success of a basic cut, the second by the success of the whole sequence of
basic cuts.
This modular behaviour achieved with collections allows for the definition of a
hierarchy of cuts: a collection can contain other collections, which in turn
can contain other collections or basic cuts, in a tree-like hierarchy. Each
level has its own post-selection tasks, allowing for a fine control of the
post-processing procedure.
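As a sketch, such a hierarchy could be assembled like this (the basic cut
classes are hypothetical placeholders, while {\bf AddCut} is the method used
later in this guide to add cuts to a collection):
\begin{verbatim}
// Hypothetical basic cuts; Chi2Cut and BetaCut are placeholders,
// not real framework classes.
Chi2Cut chi2Cut("chi2Cut");
BetaCut betaCut("betaCut");

PamCutCollection trackerCuts;
trackerCuts.AddCut(chi2Cut);   // a basic cut

PamCutCollection allCuts;
allCuts.AddCut(trackerCuts);   // a whole collection added as a single cut
allCuts.AddCut(betaCut);       // mixed with another basic cut
\end{verbatim}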
To perform some tests it could be useful to have a collection which applies
the whole bunch of cuts to all the events, regardless of whether some cuts are
not satisfied for a specific event ({\bf PamCutCollection} stops the evaluation
for the current event as soon as a cut is not satisfied). This is achieved with
the {\bf BlindCutCollection} class, which blindly checks all the cuts for all
the events. This will lead to a call to {\bf OnGood} or {\bf OnBad} for all the
cuts for each event; the collection will then call its own {\bf OnGood} or {\bf
OnBad} depending on whether all the cuts have been satisfied or not, much like
in {\bf PamCutCollection}. See the Doxygen html documentation for more
information about {\bf BlindCutCollection} and other specific cut
implementations.
\section{Actions and the SmartCollection}
When performing an analysis, each time an event is selected as good some
actions are likely to be performed, like filling histograms or writing a
report. To automate these tasks, the class {\bf CollectionAction} has been
designed. A {\bf CollectionAction} object has a {\bf Setup} method which
may contain the initialization procedures, like reading parameters from a file.
The constructor can also contain some initializations. The finalization, like
writing histograms to a file, goes in the {\bf Finalize} method. The specific
actions for good and bad events are to be defined in the {\bf OnGood} and {\bf
OnBad} methods, much like in collections. {\bf CollectionAction} is an abstract
class, which does nothing but define the interface. Its concrete
implementations will be called actions.\\
Actions are automatically handled by the {\bf SmartCollection} class. It
inherits from {\bf PamCutCollection}, and contains a vector of {\bf
CollectionAction} objects. These actions can be added using {\bf
SmartCollection::AddAction}; for each of them, the collection will take care of
calling {\bf Setup} at the beginning of the analysis, {\bf OnGood} and {\bf
OnBad} for every event (depending on the selection result), and {\bf Finalize}
at the end of the analysis. In all other respects, it behaves exactly as {\bf
PamCutCollection}. The handling of the actions depends on their position with
respect to the cuts. In the main analysis code, cuts can be added to the
collection; then, one or more actions can be added. A sequence of cuts together
with the actions added after them will be called a bunch. The last bunch may
have no actions at its end.
\begin{verbatim}
.
.
.
Cut1 cut1("cut1");
Cut2 cut2("cut2");
Action1 action1("action1");

Cut1 cut3("cut3");
Cut2 cut4("cut4");
Cut1 cut5("cut5");
Cut2 cut6("cut6");
Action1 action2("action2");
Action1 action3("action3");
.
.
.
\end{verbatim}
In the example above, {\bf cut1}, {\bf cut2} and {\bf action1} are the first
bunch, {\bf cut3}, {\bf cut4}, {\bf cut5}, {\bf cut6}, {\bf action2} and {\bf
action3} are the second bunch, and so on. If all the cuts in a bunch are
successful, the {\bf SmartCollection} will call {\bf OnGood} for every action
in that bunch, and then the analysis will go on with the next bunch for the
same event. If a cut in a bunch fails, then {\bf OnBad} is called for the
actions of that bunch, but successive bunches are ignored; the current event is
discarded and the focus switches to the next one (the {\bf
SmartBlindCollection} behaves a little differently; see the Doxygen
documentation for more details).
\\
Loosely speaking, after defining an action one simply has to instantiate it,
add it to a {\bf SmartCollection} and launch the analysis.
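As a rough sketch, reusing the hypothetical classes and objects of the example
above (the way arguments are passed to {\bf AddCut} and {\bf AddAction} is
assumed here; check the Doxygen documentation for the actual signatures):
\begin{verbatim}
// A SmartCollection used like a PamCutCollection; constructor arguments,
// if any, are omitted in this sketch.
SmartCollection collection;

collection.AddCut(cut1);          // first bunch
collection.AddCut(cut2);
collection.AddAction(action1);

collection.AddCut(cut3);          // second bunch
collection.AddCut(cut4);
collection.AddCut(cut5);
collection.AddCut(cut6);
collection.AddAction(action2);
collection.AddAction(action3);

// event is the PamLevel2 pointer to the data
collection.Process(event, 0, event->GetEntries()-1);
\end{verbatim}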
\section{The software organization}
The software is organized in a tree of directories. The idea is that each node
of this tree must provide the necessary information about its sub-branches. In
each directory, a header file contains \verb1#include1 directives for all the
header files in the sub-directories. This way, it is sufficient to include the
top-level header in the analysis software to automatically include all the
headers of the project. This top-level header is {\it PamCutEnv.h}. Each time a
sub-directory is created, the headers it contains must be included in the
parent node's header.
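For example, the header of a hypothetical node directory {\it DummyDir}
containing a {\it DummyCut} sub-directory could look like the sketch below (the
file name and guard tag are illustrative, not a framework convention):
\begin{verbatim}
#ifndef DUMMYDIR_H_
#define DUMMYDIR_H_

/* One #include per header in the sub-directories of this node. */
#include "DummyCut/DummyCut.h"

#endif /* DUMMYDIR_H_ */
\end{verbatim}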
\subsection{The makefile organization}
The instructions to build the software library are encoded in the standard {\bf
make} way, following the same tree structure as the software. Each node
directory contains an {\it include.mk} file, which includes all the
{\it include.mk} files in the sub-directories. This chain of inclusions
terminates on the leaves, which most of the time contain the definition and
implementation of the classes. Each leaf directory must provide build
instructions for the files it contains, in a file called {\it subdir.mk}; this
file has to be included in the upper {\it include.mk}. The top-level {\it
include.mk} is included by the {\it makefile}.\\
This way, each directory takes care of its sub-directories, making it quick and
easy to add new ones.
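A sketch of an {\it include.mk} for a node directory could look like this (the
directory names are hypothetical; as explained later, paths are written
relative to the root directory):
\begin{verbatim}
# Sub-directories that are themselves nodes contribute their include.mk...
-include SomeNodeDir/include.mk
# ...while leaf directories contribute a subdir.mk with build instructions.
-include DummyDir/DummyCut/subdir.mk
\end{verbatim}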
\section{The software architecture}
The software is organized in a tree of directories. The root folder contains a
{\it PamCutBase} folder, in which the definitions of the base classes ({\bf
PamCut} and {\bf PamCutCollection}) are stored, and the general headers {\it
PamCutEnv.h} and {\it CommonDefs.h}. {\it PamCutEnv.h} contains the inclusion
of all the headers of the framework, allowing for the use of the entire
software with only one \verb1#include1 in the code; {\it CommonDefs.h} contains
all the definitions which are relevant for the entire framework.
To develop a specific cut one has to define a class derived from {\bf PamCut},
which at least must provide a concrete implementation of {\bf Check}. One can
also define new versions of {\bf OnGood} and {\bf OnBad} for specific
post-selection tasks (in the base class these methods do nothing). Be very
careful if you decide to redefine {\bf ApplyCut}: remember that the interface
requires it to call the post-selection routines.
A good rule for keeping the software distribution clean is to use a different
folder for each cut definition, named after the cut itself. When you define a
new cut, create a folder named after the cut inside a suitable parent directory
and place the {\it .h} and {\it .cpp} files inside it. To speed up development,
you may copy and paste an existing cut and then modify it.\\
Actions can be defined in the same way.
\section{How to define a cut}
As said above, to define a cut (let's name it {\bf DummyCut}) it is suggested
to create a folder {\it DummyCut} inside, e.g., a {\bf DummyDir} directory
inside the root folder, and to create two files inside it: {\it DummyCut.h},
which will contain all the declarations, and {\it DummyCut.cpp}, where the
implementation will be coded. A typical structure for {\it DummyCut.h} would be:

\begin{verbatim}
#ifndef DUMMYCUT_H_
#define DUMMYCUT_H_

/*! @file DummyCut.h The DummyCut class declaration file */

#include "../../PamCutBase/PamCutBase.h"

/*! @brief An example cut. */
class DummyCut: public PamCut {
public:

  /*! @brief Constructor. */
  DummyCut(char *cutName):
    PamCut(cutName) {
  }

  /*! @brief Destructor. */
  ~DummyCut() {
  }

  /*! @brief The dummy cut check
   *
   * This routine checks the event using the dummy cut criteria.
   *
   * @param event The event to analyze.
   * @return CUTOK if the dummy cut is satisfied.
   */
  int Check(PamLevel2 *event);

};

#endif /* DUMMYCUT_H_ */
\end{verbatim}
Note the inclusion of {\it PamCutBase.h}: this is essential to let {\bf
DummyCut} know about its parent class, which in this case is {\bf PamCut}.
Another thing to take care of is the multiple inclusion protection, i.e., the
preprocessor directives:
\begin{verbatim}
#ifndef DUMMYCUT_H_
#define DUMMYCUT_H_
.
.
.
#endif /* DUMMYCUT_H_ */
\end{verbatim}
This provides protection against multiple inclusion of a header in a single
compilation unit due to circular inclusion; in practice, it will save the
developer a lot of double-definition errors from the compiler. Be sure that all
your headers are encapsulated in such a structure, and that each header uses a
different name after \verb1#ifndef1 (which must match the subsequent
\verb1#define1); otherwise, the compiler could skip the inclusion of some
headers. The convention adopted in {\bf PamCut} for such tags is
\verb1<Filename in uppercase>_H_1, which provides a unique tag for each header.

The essential redeclarations are: the constructor, the destructor and {\bf
Check}. The constructor deserves some observations: since constructors are not
inherited, each class must have its own. However, if a class has a parent
class, it is also necessary to initialize the parent class. If the parent class
has a default constructor (i.e., a constructor without arguments) the compiler
will automatically call it and the user has nothing to worry about. However,
{\bf PamCut} has no default constructor, so in every derived class there must
be an explicit call to the {\bf PamCut} constructor. This is what is done in
the lines:

\begin{verbatim}
DummyCut(char *cutName):
  PamCut(cutName) {
}
\end{verbatim}
{\bf PamCut} objects have a name, internally stored in the {\bf \_cutName}
member; so our {\bf DummyCut}, since it inherits from {\bf PamCut}, must also
have a name. Its constructor, indeed, requires a string containing the cut's
name. But in {\bf DummyCut} there's no local variable to store it, and there's
no need to define one since it is already present in {\bf PamCut}. The only
thing to do is to initialize it exactly as {\bf PamCut} would do: this is the
purpose of invoking \verb1PamCut(cutName)1. As an aside, what follows the colon
after the {\bf DummyCut} constructor declaration is called the
\emph{initialization list}: here one can (and must) call the constructors for
all the objects and variables contained in the class. These are executed before
the class constructor's body, so that one can safely assume that at
construction time all the internal objects and variables are properly
initialized.

To recap and generalize, when you write a derived class you must always call
the constructor of its parent class, which will take care of initializing
the ``parent core'' of the derived class.

One can also redefine {\bf OnGood} and {\bf OnBad}, add new methods, variables
and so on. If you plan to build a general-purpose cut that could be used by
different people, please take the time to add the appropriate documentation to
the code. In the example, Doxygen comment lines (beginning with \verb1/*!1)
are shown; an on-line guide for Doxygen can be found at:
\newline
\newline
\verb1 http://www.stack.nl/~dimitri/doxygen/manual.html1

\vspace{.5cm}
Once the header has been prepared, it's time to implement the cut in {\it
DummyCut.cpp}:
\begin{verbatim}
/*! @file DummyCut.cpp The DummyCut class implementation file */

#include "DummyCut.h"

int DummyCut::Check(PamLevel2 *event) {

  if (<Some condition about the event>)
    return CUTOK;
  else
    return 0;
}
\end{verbatim}

In this very simple implementation a basic feature is noticeable: the interface
requires that whenever an event satisfies the selection criterion, {\bf Check}
must return the {\bf CUTOK} value. This value is defined in {\it CommonDefs.h}.
The return value for a discarded event is implementation-specific, and can be
anything but {\bf CUTOK}. This return value can take different values
depending on the reason why the event has been discarded: for example, it can
be a number associated with a specific detector whose data is missing in a data
quality cut. It is then passed to {\bf OnBad}, which can perform some task
depending on the specific reason of the cut failure (e.g., depending on which
detector has no data).
Remember also to include the header file.
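As a hedged sketch of this mechanism (the cut class is hypothetical, and the
{\bf OnBad} signature shown here is an assumption; see the Doxygen
documentation for the real one):
\begin{verbatim}
int QualityCut::Check(PamLevel2 *event) {
  if (<no tracker data>)
    return 1;      // any value different from CUTOK
  if (<no ToF data>)
    return 2;
  return CUTOK;
}

void QualityCut::OnBad(PamLevel2 *event, int selectionResult) {
  if (selectionResult == 1) {
    // e.g., count events with missing tracker data
  }
  if (selectionResult == 2) {
    // e.g., count events with missing ToF data
  }
}
\end{verbatim}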
\section{How to set up the cut's build} \label{sec:build}
Once a cut has been declared and implemented, the makefiles have to be adjusted
to include it in the cut library. The {\it makefile} provided with the software
will build a library called {\it libPamCut.so}, which can then be linked to a
specific analysis code. It is based on a submakefile structure. Each folder
containing a cut must also include one of these submakefiles (named {\it
subdir.mk} by convention), which instructs the main {\it makefile} on how to
build the newly added cut. An example is:
\begin{verbatim}
# Choose a name for the object file. This must coincide with the
# .cpp and .h filename, except for the extension which has to
# be .o
OBJS += ./DummyDir/DummyCut/DummyCut.o

# Dependencies file. The extension must be .d, and the name equal
# to the cut name.
CPP_DEPS += ./DummyDir/DummyCut/DummyCut.d

# Rules for compilation. You will likely have only to specify
# the path. Put the directory path:
#   here                     here
./DummyDir/DummyCut/%.o: ./DummyDir/DummyCut/%.cpp
	@echo 'Building file: $<'
	@echo 'Invoking: GCC C++ Compiler'
	$(C++) -I${ROOTSYS}/include -I${PAM_INC} -I${PAM_INC}/yoda \
	  -Wall -c -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)" \
	  -o"$@" "$<"
	@echo 'Finished building: $<'
	@echo ' '

\end{verbatim}
Existing files can be used as a template. The first thing you have to modify
is the object name (the \verb1OBJS1 variable): be careful to append the object
name to \verb1OBJS1 using \verb1+=1 instead of overwriting it with \verb1=1.
The paths in this file are relative to the root directory where the {\it
makefile} is, e.g., \verb1./1 is not the {\it DummyCut} directory where {\it
subdir.mk} is, but the root directory which contains the {\it makefile}. The
object file ({\it DummyCut.o}) must be named after the cut, like the directory
that contains it. The \verb1CPP_DEPS1 variable must be modified in a similar
way. This variable contains a list of dependency files, which list all the
external headers (e.g., PAMELA and ROOT headers) {\bf PamCut} depends on. These
lists are automatically generated, and allow {\it make} to rebuild {\bf
PamCut} whenever one of these headers is modified. Finally, one also has to
put the directory name in the target line which precedes the build command,
just below \verb1here1.

After creating the {\it subdir.mk}, it must be included in the {\it include.mk}
of the parent directory. It looks like this:
\begin{verbatim}
# Include here the submakefiles for each cut

# Cuts
-include DummyDir/DummyCut/subdir.mk
.
.
.
\end{verbatim}
Remember to write paths relative to the root directory. Going backward along
the chain of inclusions leads to the {\it makefile}:
\begin{verbatim}
# ------------------------ Build options ----------------------#

# C++ compiler and linker
C++ = g++

# Optimization flags.
OPTIMIZE = -g3 #-DDEBUGPAMCUT

# Library flags
EXCLUSIONFLAGS = #-DNO_TOFNUCLEI -DNO_CALONUCLEI -DNO_TRKNUCLEI

COMMONDEPS = makefile

-include include.mk

#------------------------ Make body ---------------------------#
#
# Below the make commands are defined. There are no options to
# set in this section, so it has to be modified only in case of
# radical changes to the make procedure.

# Remove command for clean
RM := rm -rf

# Additional dependencies from headers of other software
# (PAMELA, ROOT)
ifneq ($(MAKECMDGOALS),clean)
ifneq ($(strip $(CPP_DEPS)),)
-include $(CPP_DEPS)
endif
endif

# All Target
all: version libPamCut.so

# Tool invocations
libPamCut.so: $(OBJS) $(USER_OBJS)
	@echo 'Building target: $@'
	@echo 'Invoking: GCC C++ Linker'
	$(C++) -shared -o"libPamCut.so" $(OBJS)
	@echo 'Finished building target: $@'
	@echo ' '

# Other Targets
clean:
	-$(RM) $(CPP_DEPS) $(OBJS) libPamCut.so
	-@echo ' '

version:
	@gcc --version | grep gcc; echo

.PHONY: all clean dependents version
\end{verbatim}
The Build options section is the one that will most likely need modification
if one wants to tweak the compilation; the make body will most of the time be
fine as it is. The {\it makefile} contains comments explaining how to set the
various options.\\
The first option to set is the compiler: g++ will work on
almost all Linux systems. Next is the optimization level, which can be set to
one of the proposed values according to the specific necessities. The {\it
makefile} comments also contain instructions on how to enable debug sections
in the code.\\
Since the PAMELA software is modular, some setups may lack libraries
needed by certain cuts. When designing such a cut it is a good habit to set up
its possible exclusion from the library. This can be done efficiently using
the preprocessor directive \verb1#ifndef1. For example, encapsulating all the
code in the header and implementation files of {\bf DummyCut} in a structure
like this:

\begin{verbatim}
#ifndef NO_DUMMYCUT
.
.
.
#endif
\end{verbatim}

\noindent will completely exclude {\bf DummyCut} from the environment if the
flag \verb1NO_DUMMYCUT1 is defined. This can be done by passing the parameter
\verb1-DNO_DUMMYCUT1 to the compiler. These exclusion flags can be defined in
the \verb1EXCLUSIONFLAGS1 variable. A concrete example is given by
{\bf TofNucleiZCut}, which requires the {\bf ToFNuclei} library: look at it to
see how its exclusion from the code is implemented, allowing one to build and
use {\bf PamCut} in those environments that do not include {\bf ToFNuclei}. In
the end, this is very similar to what is done with the debug sections.\\
The \verb1COMMONDEPS1 variable lists the files which, if modified, will trigger
a complete rebuild of the library. For example, if you change the {\it
makefile} by modifying an optimization option, all the modules should be
rebuilt so that the whole library has the same level of optimization. That's
why {\it makefile} is in \verb1COMMONDEPS1. Add all the other files that should
behave like this.\\

If you need further modifications to the build system you will need to know
more about {\it make}; an online guide is at:
\newline
\verb1http://www.linuxtopia.org/online_books/programming_tool_guides/1
\newline
\verb1 gnu_make_user_guide/make.html#SEC_Top1
\section{How to define an action}
Defining an action is very similar to defining a cut. An example header:

\begin{verbatim}
#ifndef DUMMYACTION_H_
#define DUMMYACTION_H_

#include "../CollectionAction/CollectionAction.h"

/*! @brief A dummy action definition. */
class DummyAction: public CollectionAction {

public:

  /*! @brief Constructor.
   *
   * @param actionName The action's name.
   */
  DummyAction(const char *actionName):
    CollectionAction(actionName) {}

  /*! @brief Destructor */
  ~DummyAction() {
  }

  /*! @brief The setup procedure.
   *
   * @param events The events pointer.
   */
  void Setup(PamLevel2 *events);

  /*! @brief The OnGood procedure.
   *
   * @param event The selected event.
   */
  void OnGood(PamLevel2 *event);

  /*! @brief Writes the tree of saved events to the output file. */
  void Finalize();
};

#endif /* DUMMYACTION_H_ */
\end{verbatim}
The {\bf DummyAction} declaration above is a good example. Action classes
must inherit from {\bf CollectionAction}, and their constructors have to call
the constructor of the ancestor class in the initialization list, similarly to
what happens for cuts. Then some of the base class' methods are overridden,
specifically {\bf Setup}, {\bf OnGood} and {\bf Finalize}. The last two methods
have to be overridden, since they are pure virtual (an action is supposed to do
something for good events and something to end the analysis, so making these
methods pure virtual is a way to enforce a definition in the derived classes).
Conversely, {\bf Setup} and {\bf OnBad} are concrete in the base class, with
empty implementations: since not all actions need a setup (it can also be done
in the constructor) or a procedure for bad events, this implementation allows
derived classes not to define them. Obviously, the methods re-declared in the
header have to be defined in a {\it .cpp} file, exactly as for the cuts.
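A minimal sketch of the corresponding {\it DummyAction.cpp} (the bodies are
just placeholders for the kinds of tasks mentioned above):
\begin{verbatim}
/*! @file DummyAction.cpp The DummyAction class implementation file */

#include "DummyAction.h"

void DummyAction::Setup(PamLevel2 *events) {
  // e.g., read parameters from a file or book histograms
}

void DummyAction::OnGood(PamLevel2 *event) {
  // e.g., fill histograms with quantities of the selected event
}

void DummyAction::Finalize() {
  // e.g., write the histograms to the output file
}
\end{verbatim}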
\section{How to set up the actions' build}
This topic is very similar to that explained in sec. \ref{sec:build}, so it
should be straightforward. However, look at the provided concrete
implementations of actions if you need an example to set up your build.
\section{How to build and use the library}
If the makefiles are correctly set up, the only remaining thing to do is to
type \verb1make all1. Remember to set the PAMELA environment with the
set\_pam\_env script BEFORE invoking \verb1make1. This will generate a {\it
libPamCut.so} file which will contain all the cuts. To clean the project and
build from scratch type \verb1make clean all1. To use the library in an
analysis code, the environment header must be included in the code:
\verb1#include "<root PamCut directory>/PamCutEnv.h"1. With this, all the
classes and common definitions will be accessible. A typical usage of {\bf
PamCut} inside the analysis code would look like:
\begin{verbatim}

PamCutCollection collection;

DummyCut1 dummy1;
collection.AddCut(dummy1);
DummyCut2 dummy2(param);
collection.AddCut(dummy2);

collection.Process(event, 0, event->GetEntries()-1);

\end{verbatim}
In the simple example above, a \verb1DummyCut11 and a \verb1DummyCut21 object
(the latter requiring some sort of parameter) are instantiated. They are added
to \verb1collection1, which takes care of applying them to all the events.

When the analysis code is compiled, the linker must be made aware that it
needs a library called {\it libPamCut.so} and where to find it. In the {\it
makefile} which builds the analysis program, the following option must be added
to the linker invocation:
\newline
\verb1-L<root PamCut directory> -lPamCut1.

One may also wish to move {\it libPamCut.so} to another directory: this path
must then replace what is indicated as \verb1<root PamCut directory>1 above.

Finally, when the analysis code is compiled and linked against libPamCut.so, in
order to launch it it is necessary to tell the environment where the library
is, so that the program can dynamically access it at runtime. This information
is encoded in the environment variable LD\_LIBRARY\_PATH, which contains the
paths of the accessible libraries. If libPamCut.so is still in the root PamCut
directory one can type:
\newline
\verb1export LD_LIBRARY_PATH=<root PamCut directory>:$LD_LIBRARY_PATH1
\newline
This has to be done every time you open a shell; one way to avoid this is to
append the above line at the end of your set\_pam\_env script, so that it will
be automatically executed every time you set your PAMELA environment.
\section{Usage summary}
Here's a short summary of how to develop a cut, then build and use it.
\begin{enumerate}
\item Obtain the code (from tarball, repository\ldots) and go to the root
code directory.
\item Check that the \verb1C++1 option in the Build section of {\it makefile}
is correctly set to the C++ compiler name present in your system (for many
Linux platforms, \verb1g++1 is a safe choice).
\item Create a directory named after the cut class you want to develop.
\item Place inside the newly created directory a {\it .h} file and a {\it
.cpp} file, named after the directory; edit the files to define and implement
the class (one can also cut and paste the files from an existing class and edit
them), defining at least the constructor, the destructor and {\bf Check} for
the new class.
\item Create inside the directory a {\it subdir.mk} file which contains the
instructions to build the directory content, as described in sec.
\ref{sec:build}; as usual, one can cut and paste from an existing class and
then edit.
\item Modify the {\it include.mk} in the parent directory as in sec.
\ref{sec:build}, to include the newly developed cut.
\item Modify the {\it PamCutEnv.h} file, adding the \verb1#include1 for the
new class header (see examples therein).
\item Set the PAMELA environment with the set\_pam\_env script.
\item Build the library typing \verb1make all1 (or \verb1make clean all1 to
build from scratch); this will produce the library {\it libPamCut.so} in the
root code directory, which will contain all the class definitions and
implementations.
\item Insert \verb1#include "<root PamCut directory>/PamCutEnv.h"1 in the
analysis code, to have access to all the classes in the library.
\item Develop the analysis code.
\end{enumerate}
\section{Some advice and suggestions}
\begin{itemize}
\item Derive your cuts. Try to define a new class every time you need a new
cut, instead of modifying an existing one. As an example, you can define a
cut with a specific implementation for {\bf Check}, then derive from it many
classes which only redefine {\bf OnGood} and {\bf OnBad} (see the sketch
after this list). In this way, you can have several post-selection options
associated with the same cut; if you ever need to modify the cut criteria,
you will have to do it only in one place, saving time and reducing the
opportunities for code errors.
\item Be consistent with the existing code style. Everyone has their own
code style, with their own conventions for naming variables and functions
based on personal preferences. Maintaining a uniform code style is a good way
to improve the code readability, so it's worth a little effort. The
conventions chosen for {\bf PamCut} are:
\begin{itemize}
\item the names of private and protected members (variables and methods)
always begin with an underscore, e.g., \verb1_previousOBT1, \verb1_LT1;
\item the names of variables usually begin with a lower-case letter; for
compound words, the subsequent initials are upper case, e.g., \verb1time1,
\verb1liveTime1, \verb1nHitPaddles1;
\item the names of classes and methods begin with an upper-case letter, e.g.,
\verb1PamCut1, \verb1GeoFieldCut1, \verb1ApplyCut()1.
\end{itemize}
With these conventions, a line of code like:
\newline
\verb1 GeoFieldCut geoFieldCut;1
\newline
is easily interpreted as the instantiation of a class named
\verb1GeoFieldCut1 into an object named \verb1geoFieldCut1. This makes it
possible to have objects whose names are almost identical to those of the
respective classes, allowing a straightforward type recognition. Also, the
distinction between public and private variables and methods inside a class
is immediate.
\item Respect the interface. {\bf PamCut} has been designed following precise
rules, which allow for a quite general environment that should cover many
of the necessities related to data analysis. Try to fit your
particular analysis in this scheme; this will result in much more
readable code. However, someone may need features that are not compatible with
the current interface. In this case, the first thing to do is to try a
workaround that would leave the interface unchanged. As an example, if the
automated post-selection tasks based on {\bf OnGood} and {\bf OnBad} do not
satisfy you, you can directly call the {\bf Check} method inside a loop and
then do a custom post-processing. In the worst case, the interface could
turn out to be incompatible with the analysis needs: in this case a redesign
can be considered, if the incompatibility is such that a large
piece of the analysis would be compromised. Redesigning is always a tricky
task, so it has to be considered as a last option.
\item Think about other people. If you plan to write code that is
likely to be used and modified by other people, please take the time to
write the documentation. Documenting the code is a boring and time-consuming
task, but it can save you and your colleagues a lot of headaches and
misunderstandings. The better a code is documented, the fewer questions
other people will ask you.
\end{itemize}
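As a sketch of the derivation strategy suggested in the first item (class names
are hypothetical, and the {\bf OnGood}/{\bf OnBad} signatures follow the
assumptions used in the previous examples):
\begin{verbatim}
// One Check implementation, several post-selection variants.
class EnergyCut: public PamCut {
public:
  EnergyCut(char *cutName): PamCut(cutName) {}
  int Check(PamLevel2 *event);     // the selection criterion, coded once
};

class EnergyCutWithHisto: public EnergyCut {
public:
  EnergyCutWithHisto(char *cutName): EnergyCut(cutName) {}
  void OnGood(PamLevel2 *event);   // e.g., fill a histogram for good events
};

class EnergyCutWithLog: public EnergyCut {
public:
  EnergyCutWithLog(char *cutName): EnergyCut(cutName) {}
  void OnBad(PamLevel2 *event, int selectionResult);  // e.g., log the failure reason
};
\end{verbatim}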
\end{document}
