MCFAST Notes from the C0 Workshop
December 16, 1996
Summary of MCFAST Presentations
MCFAST Overview and History
MCFAST was conceived as a tool for designing a heavy quark
experiment at a hadron collider, with emphasis on the
tracking, vertexing and triggering subsystems.
Since that time some additional functionality has been added, making
it a more general tool for studying detector design.
In order to facilitate studying design changes,
the detector configuration is specified in an ASCII file.
MCFAST does a hit level
simulation, which is based on individual device resolutions and which
includes multiple scattering,
executing quickly enough to process the millions of events
needed in a complete design cycle. "Individual device resolutions" are, for
example, the spatial resolution of a point in a pixel detector or the
drift distance resolution in a drift chamber.
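As a rough illustration of what "hit level simulation based on individual device resolutions" means, a sketch in Python (device names and resolution numbers are invented for illustration, not MCFAST's actual database values):

```python
import random

# Illustrative device resolutions (invented numbers, not MCFAST's database).
DEVICE_RESOLUTION_CM = {
    "pixel": 0.0010,          # ~10 micron point resolution
    "drift_chamber": 0.0150,  # ~150 micron drift-distance resolution
}

def smear_hit(true_position_cm, device, rng=random):
    """Smear the true crossing position by the device's intrinsic
    (gaussian) resolution -- the sense in which the hit simulation
    is 'parameterized'."""
    return rng.gauss(true_position_cm, DEVICE_RESOLUTION_CM[device])
```

Non-gaussian tails, once implemented, would amount to drawing from a different distribution at this point.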
One motivation for developing MCFAST was the recognition that
realistic B physics experiments at a hadron collider
would require large background suppression
factors and therefore could require that O(10^6) background
events be simulated in a complete design cycle. This
requirement sets the speed scale for a useful simulation tool for
such an experiment, a scale that a GEANT-based simulation cannot meet.
The original vision of MCFAST included
the simulation of charged particle tracking, triggering and
vertexing. Since that time, the scope has been expanded to include
simulations of calorimetry and of particle ID; these simulations, however,
are higher level parameterized simulations, not the hit level
simulations which are performed for tracking.
At the C0 workshop several widespread misconceptions about
MCFAST surfaced. These appear to
arise from two sources: first, the capabilities of MCFAST have changed
with time; and second, there have been some problems due to different
understandings of the use of the word "parameterized". In order to
clarify the status of MCFAST, a brief history and the development plan
are given here.
The original MCFAST (v1.4, 1994) was a wrapper around the
SLAC TRACKERR program. After some initial experience, TRACKERR was
found to have insufficient flexibility in its geometry and magnetic field
representations. MCFAST (v2.1, 1995) was rewritten
completely, retaining the essential algorithm of TRACKERR but
supporting much more flexibility in detector design.
The details of the algorithm,
common to both TRACKERR and MCFAST, are given below in
"The Original MCFAST Multiple Scattering Simulation".
The output of this algorithm is a simulation of the track parameters
and covariance matrix, including both the effects of measurement errors
and the effects of multiple scattering.
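The multiple scattering contribution to such a covariance matrix is conventionally estimated with the PDG Highland formula; MCFAST's internals are not reproduced here, but a minimal sketch of that standard estimate is:

```python
import math

def highland_theta0(p_mev, beta, x_over_X0, z_charge=1.0):
    """RMS projected multiple-scattering angle (radians) from the standard
    PDG Highland formula; p_mev is the momentum in MeV/c and x_over_X0 is
    the traversed thickness in radiation lengths."""
    return (13.6 / (beta * p_mev)) * z_charge * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0))

# e.g. a relativistic 1 GeV/c particle crossing 300 microns of
# silicon (x/X0 ~ 0.0032):
theta0 = highland_theta0(1000.0, 1.0, 0.0032)
```

The example thickness and momentum are illustrative only; the point is that each scattering surface contributes a term of this size to the track covariance.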
This approach had three significant
weaknesses. First, the hits all lie on an ideal trajectory and,
therefore, are not an adequate input to a trigger simulation.
Second, there is no simulation of errors in pattern recognition.
Third, there are no non-gaussian tails in any of the hit resolutions.
Recently MCFAST (v2.5.1, 1996; in beta test) was modified to implement
explicit multiple scattering during the
outward trace and hit generation step. MCFAST is now an
adequate tool to produce the input for a trigger simulation.
This test version was used for
the detailed trigger simulations presented at this workshop.
Ongoing work, begun in fall 1996 and scheduled for completion in early
1997, includes the following:
- For technical design reasons, the new multiple scattering
code does not work in a detector which has both central
and forward tracking elements. As part of a larger project, described
below, Martin Lohner will remove this limitation.
- Rob Kutschke has begun new work on the hit bookkeeping and the Kalman
filter code; this work will allow for the simulation of pattern
recognition errors and for the simulation of device misalignment.
See Appendix B for further details.
- Non-gaussian tails in the measurements will also be implemented
in this release.
The geometry is specified in an ASCII file. This makes it quick and
easy for the user to specify a new detector design and to study
variations on that design.
Calorimetry is supported for projective tower geometries with shapes like tubes
and cones. The parametric simulation includes the fluctuation of the
conversion point and the longitudinal and lateral spread
of the showers. Muons and hadrons (before converting) behave as minimum
ionizing particles. Energy is deposited in the calorimeter towers and
smeared according to a resolution function.
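A minimal sketch of this kind of parametric energy smearing (the resolution coefficients below are typical illustrative values, not MCFAST defaults):

```python
import math
import random

def smear_energy(e_gev, stochastic=0.15, constant=0.01, rng=random):
    """Smear a deposited tower energy with a typical calorimeter
    resolution function, sigma/E = stochastic/sqrt(E) (+) constant,
    the two terms added in quadrature. Coefficients are illustrative."""
    sigma = e_gev * math.hypot(stochastic / math.sqrt(e_gev), constant)
    return max(0.0, rng.gauss(e_gev, sigma))
```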
Martin Lohner discussed some of the future MCFAST tracking development
work which was alluded to
above. The choice of FORTRAN with no dynamic
memory allocation has led to a lack of robustness in the detector
specifications as well as pointlessly large and wasteful executables. The
current geometry structures take up 10s of MBytes when only 100s of KBytes
are needed. Classes and structures have been defined and are being
implemented in C++, with a FORTRAN binding being supplied. The initial
implementation will be available for testing in early 1997.
This re-implementation of the MCFAST tracking algorithm as well as a
proposed re-implementation of the calorimeter algorithm in C++ may fit
nicely into the GEANT4 framework to supply users with fast
algorithms as well as full GEANT simulations.
A final comment:
Always remember that MCFAST is designed
to be a tool for understanding broad detector design issues and
some selected detailed issues. It should be understood that
MCFAST performs well within those limitations. It is the responsibility
of the user to know if MCFAST is the appropriate tool for the problem at hand.
MCFAST Discussion at the C0 Workshop
A discussion session followed the presentations in which 3 problems with
MCFAST were identified:
- The existing muon simulation is much too naive. Currently punch-through is
not modeled at all and pi/K decay in flight is always modeled as two distinct
tracks.
The solutions to this are straightforward and are on the MCFAST to-do list.
This is certainly an area of MCFAST to which a volunteer could contribute.
- No simulation of RICH or other particle ID.
Code to simulate particular detectors exists but is not interfaced
to MCFAST. For C0 studies, the existing code should be put in MCFAST
as user code, analogous to the trigger code.
- The simulation of multiple interactions assumes all events are in the same
bucket. This is an issue for the calorimeters in high-pt detectors
operating in a high-luminosity environment with multiple interactions per crossing.
This is not a C0 issue and is conceptually easy to fix.
The Original MCFAST Multiple Scattering Simulation
This is the guts of the original MCFAST tracking code. The essential
idea was taken from TRACKERR.
- Trace the track outward through the detector, following the
vacuum trajectory; that is, there is no simulation of
multiple scattering at this stage.
- Compute the intersection of this trajectory with each
sensitive volume and compute the "measurement", for example the
position, and its error, of where the track crossed a silicon plane.
The output of this step is the generated hit list for each track.
- On this outward trace, store a sorted list of all of the materials
through which the track passed. The arc length in each material is recorded.
- Apply a simple model of efficiency and smearing to the generated
hit list. In this sense the simulation can be said to be parameterized:
each generated hit has some device-dependent resolution associated with
it and the smearing is done according to that resolution. At present
tails in the resolution function are not simulated. ( Here a true,
full simulation would simulate such things as pulse sharing in
a silicon detector; once this is done clustering code is also needed. )
- The output of the preceding steps is an interleaved list of
measurements and scattering surfaces.
- Use the interleaved list assembled in the preceding step as the input
to a Kalman filter.
Notice that this will include all of the scattering surfaces.
The output of this step is a set of track parameters and their
covariance matrix.
- Here is the critical point. In any track fitter, including a Kalman
filter, the covariance matrix
does not depend on the measurements! It only depends on the
measurement errors, the amount of multiple scattering and the
relative positions of the measurement points and the scattering
surfaces. On the other hand, the track parameters do
depend strongly on the measurements.
- So the covariance matrix from this Kalman filter fit is very close to correct.
The errors which are made are that the relative positions of the
measurements and scattering surfaces are slightly off.
It is an excellent simulation of the covariance matrix which
would have been obtained by explicitly modeling multiple scattering
in both the outward and inward traces.
- Of course the track parameters from this fit are complete junk.
So they are discarded and a new set of track parameters, chosen
from the measured covariance matrix, is thrown. It is these
track parameters which are reported to the user.
- The most significant weakness in the above scheme is that one
cannot properly model pattern recognition.
This enters at two levels:
- In the trigger. The cuts in the trigger algorithm must include both the
- random walk away from the vacuum trajectory ( multiple scattering ) and the
- systematic deviation from the vacuum trajectory ( energy loss ).
- In vertexing: how does putting incorrect hit(s) onto a track affect the
determination of the parameters of the vertex from which
said track originates?
- A lesser weakness is the failure to model the following component
of the "real life" resolution function. When a particle goes through
the detector it traverses a particular set of scattering materials.
When we reconstruct the track we must also build the list of
scattering materials through which the track passed. Whenever
a track passes near the edges or corners of some material, the
material list will sometimes be incorrect, either by omission of
some material or by inclusion of some incorrect material. This
is believed to be a negligible effect and, moreover, one which will
tend to cancel in the mean ( but not in the RMS ) in an ensemble of events.
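The critical point above -- that the fitted covariance matrix does not depend on the measured values -- can be checked directly for a straight-line least-squares fit, to which the Kalman filter is equivalent for a linear model. A sketch, with invented plane positions and resolution:

```python
import numpy as np

rng = np.random.default_rng(0)

z = np.array([1.0, 2.0, 3.0, 5.0, 8.0])      # hypothetical detector plane positions
A = np.column_stack([np.ones_like(z), z])    # straight-line model: x = a + b*z
sigma = 0.01                                 # hypothetical single-hit resolution
W = np.eye(len(z)) / sigma**2                # weight matrix (uncorrelated hits)

# Parameter covariance: built from the geometry and the measurement
# errors only -- the measured values never enter.
cov = np.linalg.inv(A.T @ W @ A)

def fit(measurements):
    """Weighted least-squares estimate of (a, b) from measured positions."""
    return cov @ A.T @ W @ measurements

# Different measurement vectors change the fitted parameters but not 'cov'.
# This is what lets MCFAST discard the junk parameters fitted along the
# vacuum trajectory, keep the covariance, and throw the reported
# parameters from it:
reported = rng.multivariate_normal(mean=np.zeros(2), cov=cov)
```

This is only a two-parameter sketch of the idea, not the five-parameter helix fit MCFAST actually performs.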
Simulation of Pattern Recognition Errors
- The goal is to simulate the errors which attach incorrect hits to tracks.
- First, generate the correct hit list for all tracks.
- Sprinkle noise hits throughout the detector. The algorithm
may be left as user code.
- Look for sets of tracks which have many close by hits.
Invent some algorithm for distributing the hits on tracks.
The important thing here is to understand the frequency with which
this problem occurs. In practice these tracks are likely to be
so poorly reconstructed as to be useless - so the details of the
misassignments are of much less importance than getting the
frequency of misassignment correct.
- Examine the remaining hits on tracks and look for true hits
which have nearby incorrect hits. Occasionally the incorrect
hits will be placed in the hit list and the correct hit removed.
- In the far distant future, one could imagine implementing
true pattern recognition code. But this is unlikely on the
time scale of mid 1997.
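The scheme sketched in this appendix could be prototyped along the following lines (all rates, windows, and names below are hypothetical illustrations, not MCFAST code):

```python
import random

def confuse_hits(true_hits, detector_length=100.0, n_noise=50,
                 swap_window=0.1, rng=random):
    """Sprinkle noise hits through a 1-D 'detector' and occasionally
    replace a true hit by a nearby noise hit. Per the note above, the
    frequency of misassignment matters more than its details."""
    noise = [rng.uniform(0.0, detector_length) for _ in range(n_noise)]
    out = []
    for h in true_hits:
        nearby = [n for n in noise if abs(n - h) < swap_window]
        # If a noise hit lies close enough, sometimes pick it instead
        # of the correct hit (50% is an arbitrary illustrative rate).
        if nearby and rng.random() < 0.5:
            out.append(rng.choice(nearby))
        else:
            out.append(h)
    return out
```

In a real implementation the noise-hit algorithm would be left as user code, as noted above, since occupancy is detector specific.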