The top-level functions should be used in the sequence below.

Each of these functions calls sub-functions in the flow presented in the graph below.




An example of the original image is shown below.


FilterBackground(image): as the image is messy, this function takes the connected convex hull and cleans up everything outside this hull.

image (numpy 2d uint8 array): the original image

GetEventPositions(pic, debug_mode=0): get all three tip points and the vertex point

pic (numpy 2d uint8 array): the original image
debug_mode (bool): plot some debug features; this should be turned off in batch mode

return (numpy array 3*2, (float)*2): the three tip points, and the vertex point

GetEventPositions_(pic, debug_mode=0, center_width=12, quadrant_thresh=100, center_thresh=300, err_thresh=12, spread_thresh=6): get all three tip points and the vertex point (currently not used)

pic (numpy 2d uint8 array): the original image
debug_mode (bool): plot some debug features; this should be turned off in batch mode
center_width (float): not used for now
quadrant_thresh (float): threshold on the number of pixels in the reaction-product part
center_thresh (float): threshold for the beam part
err_thresh (float): threshold on the average distance to the fit
spread_thresh (float): threshold on the x, y spread


AveDist(x,y,k,b): calculate the average distance from the points (x,y) to a straight line with parameters (k,b).

x (numpy float array): the x positions
y (numpy float array): the y positions
k (float): the slope of the line
b (float): the y-intercept of the line

return (float): average distance from (x,y) to the line (k,b)
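Since AveDist is pure geometry, it can be sketched in a few lines. This is an illustrative re-implementation (the lower-case name is mine), not the original code:

```python
import numpy as np

def ave_dist(x, y, k, b):
    """Average perpendicular distance from the points (x, y) to the line y = k*x + b."""
    # distance from a point (x0, y0) to the line k*x - y + b = 0 is
    # |k*x0 - y0 + b| / sqrt(k^2 + 1)
    return np.mean(np.abs(k * x - y + b) / np.sqrt(k ** 2 + 1))
```

Points that lie exactly on the line give 0; shifting every point vertically by d changes the result by d / sqrt(k^2 + 1).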

r2(x,y,k,b): calculate the r2 score of the fit

parameters are the same as for the function above

return (float): r2 score
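A plain coefficient of determination matches this description; the sketch below is my assumption of what r2 computes:

```python
import numpy as np

def r2(x, y, k, b):
    """R^2 (coefficient of determination) of the straight-line fit y = k*x + b."""
    y_pred = k * x + b
    ss_res = np.sum((y - y_pred) ** 2)      # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```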


VertexPos_(fits,y0): use all the fitting results and the y position of the right-most tip point to estimate the vertex position (currently not used)

The function divides the calculation into two scenarios: 1. with 2 or 3 fitted lines, the parameters of the first two lines are used for the calculation; 2. with only 1 line, which cannot be the center line (because of the previous fitting condition), the center line is assumed to lie straight on y0.

fits ([int,float,float,float]*3): fit results for three parts of the image

return (float,float): (x,y)

VertexPos(image_,ps): estimate the vertex position

The function calculates the minimal distance from each pixel to each of the three lines (semi-open rays from the vertex point toward the tip points). If a pixel falls on the closed end of a ray, a high penalty (1e5) is added in the distance calculation.

image_ (numpy 2d uint8 array): the filtered image

ps (numpy array 3×2): the three positions of the tip points

return (float,float): (x,y)
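The ray-distance rule above can be sketched for a single pixel like this; the helper name `ray_distance` is mine, and the original works on whole images rather than one point at a time:

```python
import numpy as np

def ray_distance(p, vertex, tip, penalty=1e5):
    """Distance from pixel p to the semi-open ray from vertex toward tip.

    Pixels that project behind the vertex (the closed end of the ray)
    receive the large penalty described in the text.
    """
    d = np.asarray(tip, float) - np.asarray(vertex, float)    # ray direction
    v = np.asarray(p, float) - np.asarray(vertex, float)      # pixel relative to vertex
    t = np.dot(v, d) / np.dot(d, d)     # projection parameter along the ray
    if t < 0:                           # behind the vertex: closed end
        return penalty
    # perpendicular distance to the infinite line through vertex and tip
    return np.abs(d[0] * v[1] - d[1] * v[0]) / np.linalg.norm(d)
```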

tbjcfit(xs,ys): use SVD to calculate the least-squares DISTANCE (not y) fit

xs (numpy float array): the x positions
ys (numpy float array): the y positions

return (int,float,float,float):(number of pixels, k,b,average distance)
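A least-squares-distance (total least squares) fit via SVD can be sketched as below; this is a minimal stand-in for tbjcfit, assuming the line is not vertical:

```python
import numpy as np

def tbjcfit(xs, ys):
    """Total-least-squares line fit: minimizes perpendicular distance, not y-residuals."""
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    # SVD of the centered points: the first right-singular vector is the line direction
    _, _, vt = np.linalg.svd(pts - centroid)
    dx, dy = vt[0]
    k = dy / dx                          # slope (assumes a non-vertical line)
    b = centroid[1] - k * centroid[0]    # the TLS line passes through the centroid
    # average perpendicular distance of the points to the fitted line
    err = np.mean(np.abs(k * xs - ys + b) / np.sqrt(k ** 2 + 1))
    return len(xs), k, b, err
```

Unlike ordinary least squares, this treats x and y symmetrically, which is the right choice for near-vertical tracks in the image.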


GetFit(image_, part_thresh=60, err_thresh=1.2, spread_thresh=6): extract the x,y positions of every pixel above 0, then fit the x,y points using tbjcfit. The result is checked against a few conditions to decide whether the fit is good, e.g. whether the average distance from the points to the line is within err_thresh and whether the scattered positions actually form a reasonable line-shaped distribution.

image_ (numpy 2d uint8 array): the part of the image you want to fit with a line
part_thresh (float): threshold on the number of pixels required to attempt a fit
err_thresh (float): threshold on the average distance of all points to the fitted line
spread_thresh (float): threshold requiring the data to spread out along either the x or the y axis

return (numpy 2d uint8 array): a copy of the filtered image
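Putting the tbjcfit-style fit and the three cuts together, the GetFit logic might look like the sketch below. The SVD fit is inlined to keep it self-contained, and for illustration it returns the fit parameters rather than the filtered image:

```python
import numpy as np

def get_fit(image, part_thresh=60, err_thresh=1.2, spread_thresh=6):
    """Sketch of the quality cuts described above; returns None if any cut fails."""
    ys, xs = np.nonzero(image)                 # coordinates of pixels above 0
    if len(xs) < part_thresh:                  # enough pixels to attempt a fit?
        return None
    pts = np.column_stack([xs, ys]).astype(float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)          # total-least-squares direction
    dx, dy = vt[0]
    k = dy / dx
    b = c[1] - k * c[0]
    err = np.mean(np.abs(k * xs - ys + b) / np.sqrt(k ** 2 + 1))
    if err > err_thresh:                       # points too far from the line
        return None
    if max(np.ptp(xs), np.ptp(ys)) < spread_thresh:  # no line-like spread on x or y
        return None
    return len(xs), k, b, err
```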




GetLineInfo(p1,p2, L_thre = -5): calculate the length and angle between two points

p1 (float,float): the tip point
p2 (float,float): the vertex point
L_thre (float): not used for now
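The length/angle computation for a tip-vertex pair is straightforward; a sketch, where the angle convention (measured from the x, i.e. beam, axis) is my assumption:

```python
import numpy as np

def get_line_info(p1, p2):
    """Length and angle of the segment from the vertex p2 to the tip p1."""
    dx = p1[0] - p2[0]
    dy = p1[1] - p2[1]
    length = np.hypot(dx, dy)
    theta = np.degrees(np.arctan2(dy, dx))  # angle w.r.t. the x (beam) axis
    return theta, length
```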

GetEventInfo(points,p0): calculate the length and angle for each pair of tip point and vertex point

points ((float,float)*3): the positions of the tip points
p0 (float,float): the position of the vertex point

return ((float,float)*3,float): (theta,length) for each pair, and the reaction range

Distance(contours,n1,n2): calculate the minimum distance between two contours

contours [numpy.array (n,1,2)]:all the contours
n1 (int): index of the first contour
n2 (int): index of the second contour

return (float): the minimum distance
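A brute-force version of Distance using NumPy broadcasting, with the contour layout following OpenCV's (n, 1, 2) convention:

```python
import numpy as np

def contour_min_distance(contours, n1, n2):
    """Minimum Euclidean distance between any point of contour n1 and any of n2."""
    a = contours[n1].reshape(-1, 2).astype(float)  # OpenCV contours are (n, 1, 2)
    b = contours[n2].reshape(-1, 2).astype(float)
    # all pairwise distances via broadcasting: (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min()
```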

Groups(contours): combine all adjacent contours

contours [numpy.array (n,1,2)]:all the contours

return ((float)*n, (float)*n): grouped contours
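Groups can be sketched as a union-find over pairwise minimum distances; the distance threshold below is an assumed parameter, not taken from the source:

```python
import numpy as np

def group_contours(contours, dist_thresh=5.0):
    """Merge contours whose minimum point-to-point distance is below dist_thresh."""
    n = len(contours)
    parent = list(range(n))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        a = contours[i].reshape(-1, 2).astype(float)
        for j in range(i + 1, n):
            b = contours[j].reshape(-1, 2).astype(float)
            dmin = np.linalg.norm(a[:, None] - b[None, :], axis=2).min()
            if dmin < dist_thresh:     # adjacent: merge the two groups
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())       # lists of contour indices per group
```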

convexHull(thresh, debug_mode = 0): calculate the convexHull using the largest grouped contour

thresh (numpy 2d uint8 array): image after preliminary processing

return ((float,float)*n): the hull points

MaxEnclosedTriangle(hull): calculate the maximum enclosed triangle using the hull points

hull ((float,float)*n): the hull points

return (int, int, int): index of the hull points to form the maximum enclosed triangle
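A brute-force search over all hull-point triples illustrates the idea; the original may use a faster algorithm, so this O(n³) version is only a sketch:

```python
import numpy as np
from itertools import combinations

def max_enclosed_triangle(hull):
    """Hull-point triple (indices) whose triangle has the maximum area."""
    hull = np.asarray(hull, dtype=float)
    best, best_area = None, -1.0
    for i, j, k in combinations(range(len(hull)), 3):
        a, b, c = hull[i], hull[j], hull[k]
        # triangle area from the 2D cross product of two edge vectors
        area = abs((b[0] - a[0]) * (c[1] - a[1])
                   - (b[1] - a[1]) * (c[0] - a[0])) / 2.0
        if area > best_area:
            best, best_area = (i, j, k), area
    return best
```

Since a hull rarely has more than a few dozen points here, the cubic cost is harmless in practice.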

TipFinder(thresh, debug_mode = 0): return the positions of the tips of the maximum enclosed triangle

thresh (numpy 2d uint8 array): image after preliminary processing

return ((float,float)*3): the positions of the tips of the maximum enclosed triangle

The file reads in two SQLite databases: the data file and the map file.

In the ADCdf table, each entry is the signal trace for one of the 253 channels of the chamber.

The first preliminary filter works per signal trace with two thresholds: 1. a time bin must be larger than 20% of the largest amplitude; 2. it must be larger than 20. Every time bin must pass both thresholds.
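The two-threshold rule can be sketched per trace like this (the function name and the choice to return surviving bin indices are mine):

```python
import numpy as np

def filter_trace(trace, rel_frac=0.2, abs_thresh=20):
    """Indices of time bins passing both thresholds: >20% of max and >20."""
    trace = np.asarray(trace)
    mask = (trace > rel_frac * trace.max()) & (trace > abs_thresh)
    return np.nonzero(mask)[0]
```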

A list of positions (R,z) is extracted from the traces, where R is the radial position of the centroid of the pad and z is the time-bin number.

The four quadrants are used to construct two images, one from each pair of opposite quadrants. An overlapping score (an overlap at the edge, y=0 or 300, yields a larger score than an overlap in the center, y=150) is calculated to determine the direction for aligning the two images.
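One way to realize the edge-over-center weighting is to weight each overlapping row by its distance from the mid-plane. This particular weighting is my assumption, not the original scoring:

```python
import numpy as np

def overlap_score(overlap_ys, height=300):
    """Score overlapping rows: y near 0 or height counts more than y near height/2."""
    ys = np.asarray(overlap_ys, dtype=float)
    return float(np.sum(np.abs(ys - height / 2.0)))
```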

An enlarged reconstructed image is presented below.

Besides that, the image is heavily contaminated by noisy data and disconnected points, so the preprocessing steps consist of GaussianBlur, threshold, erode and dilate.

__init__(self,data_path,map_path): initialization of DataFactory

data_path (str): the relative path to the data file
map_path (str): the relative path to the ATTPC map file

This module loads the ADC table into a pandas DataFrame. The function then iterates through all channels to find the time bins where the signal amplitude is above threshold, and stores all the filtered signals into t3.


EID (int): the EventID of the event for constructing the image

This function takes the DataFrame t3 from the __init__ function and produces an image using the positions for each “EventID”.
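A sketch of building the per-event image from t3, assuming columns named 'EventID', 'R' and 'z' and a placeholder image shape (neither is taken from the source):

```python
import numpy as np
import pandas as pd

def build_image(t3, eid, shape=(300, 512)):
    """Fill a 2D image from the (R, z) hit positions of one event."""
    hits = t3[t3["EventID"] == eid]
    img = np.zeros(shape, dtype=np.uint8)
    rs = hits["R"].to_numpy(dtype=int)
    zs = hits["z"].to_numpy(dtype=int)
    np.add.at(img, (rs, zs), 1)   # accumulate duplicate (R, z) hits correctly
    return img
```

`np.add.at` is used instead of `img[rs, zs] += 1` because fancy indexing would count repeated (R, z) pairs only once.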


Since most of the analysis work happens in the Python code, it deserves an independent post explaining all the code. So far the code only works for two-body reactions with regular kinematics, where both reaction products are forward focused. It reconstructs a 2D image using the signal traces from each of the 253 channels, performs a first-order cleanup of this image, and contains functions to extract line features from the 2D image.
The package is so far producing reasonable results, which can be observed in the theta1 vs theta2 plot of the two reaction products.


tbjcATTPCroot Installation Instructions

Finally, I think I should write instructions for using my code.

The code is basically divided into two parts: the C++ convertor, which is inherited from Yassid’s ATTPCroot, and the Python analysis code. Between the two, there is also a Python script converting the ROOT tree data files into SQLite.

The convertor and all other Python code need to be installed separately.

Convertor :

The C++ part of the convertor converts a raw binary data file to a ROOT tree file; then you need the Python convertor to convert the ROOT tree file to a SQLite database.

Installation Requirements: ROOT > 6.xx, Boost, CMake, a Linux environment, Python (Anaconda 2.x is recommended)

Installation steps:

### C++ part ###

to build:
mkdir build
cd build
cmake ..
make

### python part ###

### download the Anaconda 2.x
bash Anaconda-2.x.x-Linux-x86[_64].sh

How to run:

There are two ways of running the program.

  • single process: in the main folder (build/example will only run from the main folder)
./build/example <root file name> <Binary file name>  ## converts the binary data to a ROOT tree file
python tbjcConvertor/ <root file name> <SQLite database name> ## converts the ROOT tree file to a SQLite database
  • multi process:


Tested System:

Ubuntu 16.04.1 LTS
Red Hat Enterprise Linux Server release 7.4 (Maipo)


The C++ code is a single-thread, single-process program, which can be parallelized (multiprocess) through the

In, there are three variables that need to be changed; all of them are RELATIVE DIRECTORIES

parentPath: the relative directory of the parent folder of the DATA FOLDERS of the raw binary

SQLpath: where you want to store all your converted SQLite databases

paths: all the data folders you want to convert

Analysis Part:

Ideally, the analysis part of the C++ code should work, including ATHoughTask, ATPSATask, ATAnalysisTask and ATPhiRecoTask. However, it is not commonly used in my analysis work.

What’s different?

As mentioned in the ReadMe file, the FairRoot part and the ROOT TClonesArray are completely faked. The fake classes only provide the basic functions of iterating tasks and storing temporary variables. However, from my observation, all the analysis tasks should be functioning.

Analysis program :

Though most of the programs are unfinished work, they should give good insight into what the data look like.

The most complete program is the VertexAnalyzer, which reconstruct the


Besides the built-in Anaconda packages, you will also need opencv2 and seaborn (mainly for visualization, when the debug option is on)

conda install -c menpo opencv
conda install seaborn

Tested System:

Ubuntu 16.04.1 LTS
Red Hat Enterprise Linux Server release 7.4 (Maipo)


under the main folder, run for the multi-process mode


or run the jupyter notebook interactive mode

jupyter notebook

this will produce a text file containing a list of ranges for the reaction length.


Log of installing ATTPC-ROOT (Yassid’s version) on CRC Notre Dame

Well, the steps below do not work yet!!!!!!!!! I absolutely hate scientific software!!!!!

installing Fairsoft

before installation, my .bashrc file looks like below

# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac

if [ -r /opt/crc/Modules/current/init/bash ]; then
        source /opt/crc/Modules/current/init/bash
fi

# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options

# append to the history file, don't overwrite it
shopt -s histappend

# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)

# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize

# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar

# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
    debian_chroot=$(cat /etc/debian_chroot)
fi

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    xterm-color|*-256color) color_prompt=yes;;
esac

# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
	# We have color support; assume it's compliant with Ecma-48
	# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
	# a case would tend to support setf rather than setaf.)
	color_prompt=yes
    else
	color_prompt=
    fi
fi

if [ "$color_prompt" = yes ]; then
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt

# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
    ;;
*)
    ;;
esac

# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
    test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
    alias ls='ls --color=auto'
    #alias dir='dir --color=auto'
    #alias vdir='vdir --color=auto'

    alias grep='grep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias egrep='egrep --color=auto'
fi

# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'

# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

# Add an "alert" alias for long running commands.  Use like so:
#   sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi

module load cmake/3.6.3

#module load cuda/7.5 # /8.0
module load git
module load geant/

#module load graphviz
#module load intel/16.0
#module load gcc/5.2.0
# define env variable

export VIMRUNTIME=~/.local/usr/share/vim/vim80

export LD_LIBRARY_PATH=~/.local/usr/local/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=~/.local/lapack-3.6.1/BUILD/lib:$LD_LIBRARY_PATH
export PATH=~/.local/usr/bin:$PATH
export PATH=~/.local/usr/local/bin:$PATH
export PATH=$HOME/MySQL/bin:$PATH
export PATH=$HOME/MySQL/scripts:$PATH
export PATH=/afs/$PATH
export CPATH=~/.local/usr/local/include:$CPATH

then, install libxml2 by

git clone git://
cd libxml2/
## set the prefix to the location of your local installation directory
./configure --prefix=/afs/
make install

then download fairsoft and do the following

mkdir ~/fair_install
cd ~/fair_install
mkdir FairSoftInst
git clone -b dev
cd FairSoft
## give the version of latest root
export ROOTVERSION=v6-10-02
# 1) gcc (on Linux) 5) Clang (on OSX)
# 2) No Debug Info
# 3) yes, root6 please
# 4) Internet (install G4 files from internet)
# 5) No python 
# path: I give full directory of FairSoftInst

After the installation, add to your .bashrc file

export SIMPATH=/afs/ PATH=/afs/$PATH 

installing FairROOT

then I did the following

## not quite sure if the following line is needed
export ROOTVERSION=v6-10-02
cd ~/fair_install
git clone -b dev
cd FairRoot
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX="~/fair_install/FairRootInst" ..
make install

well………….. the test results show that none of the tests passed, so I have no way to tell whether the installation is successful, but at least there were no errors.