=== TMVAnalysis: use method(s)...
=== - <CutsGA>
=== - <Likelihood>
=== - <LikelihoodPCA>
=== - <PDERS>
=== - <KNN>
=== - <HMatrix>
=== - <Fisher>
=== - <FDA_MT>
=== - <MLP>
=== - <SVM_Gauss>
=== - <BDT>
=== - <BDTD>
=== - <RuleFit>

TMVA -- Toolkit for Multivariate Data Analysis
        Version 3.9.5, Aug 09, 2008
        Copyright (C) 2005-2008 CERN, MPI-K Heidelberg and Victoria U.
        Home page http://tmva.sourceforge.net
        All rights reserved, please read http://tmva.sf.net/license.txt

TMVAlogon: use "TMVA" style [TMVA style based on "Plain" with modifications defined in tmvaglob.C]

##########################################   TMVAout.root
--- Factory        : You are running ROOT Version: 5.20/00, Jun 24, 2008
--- Factory        : 
--- Factory        : _/_/_/_/_/ _|      _|  _|      _|    _|_|   
--- Factory        :    _/      _|_|  _|_|  _|      _|  _|    _| 
--- Factory        :   _/       _|  _|  _|  _|      _|  _|_|_|_| 
--- Factory        :  _/        _|      _|    _|  _|    _|    _| 
--- Factory        : _/         _|      _|      _|      _|    _| 
--- Factory        : 
--- Factory        : __________TMVA Version 3.9.5, Aug 09, 2008
--- Factory        : 
--- Factory        : Preparing trees for training and testing...
--- DataSet        : Parsing option string: 
--- DataSet        : "NSigTrain=10000000000:NBkgTrain=100000000000::NSigTest=2:NBkgTest=2:SplitMode=Alternate:NormMode=NumEvents:!V"
--- DataSet        : The following options are set:
--- DataSet        : - By User:
--- DataSet        :     SplitMode: "Alternate" [Method for selecting training and testing events (default: random)]
--- DataSet        :     NormMode: "NumEvents" [Overall renormalisation of event-by-event weights (NumEvents: average weight of 1 per event, independently for signal and background; EqualNumEvents: average weight of 1 per event for signal, and sum of weights for background equal to sum of weights for signal)]
--- DataSet        :     NSigTrain: "0" [Number of signal training events (default: 0 - all)]
--- DataSet        :     NBkgTrain: "0" [Number of background training events (default: 0 - all)]
--- DataSet        :     NSigTest: "2" [Number of signal testing events (default: 0 - all)]
--- DataSet        :     NBkgTest: "2" [Number of background testing events (default: 0 - all)]
--- DataSet        :     V: "False" [Verbose mode]
--- DataSet        : - Default:
--- DataSet        :     SplitSeed: "100" [Seed for random event shuffling]
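Every "Parsing option string" block in this log follows the same grammar: ":"-separated tokens, "Name=Value" pairs, bare flags set to True, and a leading "!" negating a flag. A minimal sketch of that grammar (illustrative only, not TMVA's actual parser):

```python
# Illustrative sketch of TMVA-style option-string decoding (not TMVA's
# parser): tokens split on ":", "Name=Value" pairs, bare flags, "!" negation.
def parse_options(optstr):
    opts = {}
    for tok in optstr.split(":"):
        if not tok:
            continue                      # tolerate empty tokens from "::"
        if "=" in tok:
            name, value = tok.split("=", 1)
            opts[name] = value
        elif tok.startswith("!"):
            opts[tok[1:]] = False         # "!V" -> verbose off
        else:
            opts[tok] = True              # bare flag -> on
    return opts

opts = parse_options("SplitMode=Alternate:NormMode=NumEvents:!V")
```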
--- DataSet        : Create training and testing trees: looping over signal events ...
--- DataSet        : Create training and testing trees: looping over background events ...
--- DataSet        : Prepare training and test samples:
--- DataSet        :     Signal tree     - number of events :    518
--- DataSet        :     Background tree - number of events :    825
--- DataSet        : Preselection:
--- DataSet        :     No preselection cuts applied on signal or background
--- DataSet        : Weight renormalisation mode: "NumEvents": renormalise signal and background
--- DataSet        :     weights independently so that Sum[i=1..N_j]{w_i} = N_j, j=signal, background
--- DataSet        :     (note that N_j is the sum of training and test events)
--- DataSet        : Event weights scaling factor:
--- DataSet        :     signal     : 5.7503
--- DataSet        :     background : 1.76908
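The "NumEvents" scaling factors quoted above follow from rescaling each class so that the average event weight is 1, i.e. factor = N_j / Sum(w_i). A toy sketch of that arithmetic (illustrative, not TMVA code):

```python
# Sketch of the "NumEvents" renormalisation: scale each class's event
# weights so that their sum equals the number of events in that class.
def numevents_scale(weights):
    return len(weights) / sum(weights)

# Toy example: 4 events with raw weight 0.25 each -> factor 4.0,
# and the rescaled weights sum to the event count.
w = [0.25, 0.25, 0.25, 0.25]
f = numevents_scale(w)
scaled = [f * wi for wi in w]
```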
--- DataSet        : Pick alternating training and test events from input tree for signal
--- DataSet        : Pick alternating training and test events from input tree for background
--- DataSet        : Create training tree
--- DataSet        : Create testing tree
--- DataSet        : Collected:
--- DataSet        : - Training signal entries     : 516
--- DataSet        : - Training background entries : 823
--- DataSet        : - Testing  signal entries     : 2
--- DataSet        : - Testing  background entries : 2
--- DataSet        : Compute correlation matrices on tree: TrainingTree
--- DataSet        : 
--- DataSet        : Correlation matrix (signal):
--- DataSet        : ---------------------------------------------------------------
--- DataSet        :                       HT  Jet1Pt DeltaRJet1Jet2 WTransverseMass
--- DataSet        :              HT:  +1.000  +0.932         +0.125          +0.181
--- DataSet        :          Jet1Pt:  +0.932  +1.000         +0.191          +0.038
--- DataSet        :  DeltaRJet1Jet2:  +0.125  +0.191         +1.000          -0.226
--- DataSet        : WTransverseMass:  +0.181  +0.038         -0.226          +1.000
--- DataSet        : ---------------------------------------------------------------
--- DataSet        : 
--- DataSet        : Correlation matrix (background):
--- DataSet        : ---------------------------------------------------------------
--- DataSet        :                       HT  Jet1Pt DeltaRJet1Jet2 WTransverseMass
--- DataSet        :              HT:  +1.000  +0.927         +0.141          +0.018
--- DataSet        :          Jet1Pt:  +0.927  +1.000         +0.214          -0.069
--- DataSet        :  DeltaRJet1Jet2:  +0.141  +0.214         +1.000          -0.257
--- DataSet        : WTransverseMass:  +0.018  -0.069         -0.257          +1.000
--- DataSet        : ---------------------------------------------------------------
--- DataSet        : 
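The tables above are ordinary Pearson correlation matrices of the four input variables, computed separately on signal and background. A plain (unweighted) sketch of that computation, without ROOT:

```python
# Illustrative Pearson correlation matrix over a list of variable columns
# (unweighted sketch; TMVA computes these from the training tree).
import math

def correlation_matrix(columns):
    n = len(columns[0])
    means = [sum(c) / n for c in columns]
    def cov(a, b, ma, mb):
        return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sig = [math.sqrt(cov(c, c, m, m)) for c, m in zip(columns, means)]
    return [[cov(a, b, means[i], means[j]) / (sig[i] * sig[j])
             for j, b in enumerate(columns)]
            for i, a in enumerate(columns)]

# Column 1 is 2x column 0 (correlation +1), column 2 is reversed (-1).
m = correlation_matrix([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]])
```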
--- DataSet        : New variable Transformation NoTransform requested and created.
--- TransBase      : Create scatter and profile plots in target-file directory: 
--- TransBase      : TMVAout.root:/InputVariables_NoTransform/CorrelationPlots
--- TransBase      : Ranking input variables...
--- NoTransform    : Ranking result (top variable is best ranked)
--- NoTransform    : ----------------------------------------------------------------
--- NoTransform    : Rank : Variable        : Separation
--- NoTransform    : ----------------------------------------------------------------
--- NoTransform    :    1 : HT              : 2.563e-01
--- NoTransform    :    2 : Jet1Pt          : 2.270e-01
--- NoTransform    :    3 : WTransverseMass : 7.929e-02
--- NoTransform    :    4 : DeltaRJet1Jet2  : 6.387e-02
--- NoTransform    : ----------------------------------------------------------------
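The "Separation" column in the ranking above is TMVA's separation figure of merit, <S^2> = (1/2) * Integral (yS - yB)^2 / (yS + yB), with both distributions normalised to unit area; it is 0 for identical shapes and 1 for disjoint ones. A binned sketch:

```python
# Binned sketch of the separation figure of merit used in the ranking:
# <S^2> = 0.5 * sum_bins (s_i - b_i)^2 / (s_i + b_i), each histogram
# normalised to unit area first.
def separation(sig_hist, bkg_hist):
    s_tot, b_tot = sum(sig_hist), sum(bkg_hist)
    sep = 0.0
    for s, b in zip(sig_hist, bkg_hist):
        s, b = s / s_tot, b / b_tot
        if s + b > 0:
            sep += 0.5 * (s - b) ** 2 / (s + b)
    return sep
```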
--- Cuts           : Parsing option string: 
--- Cuts           : "H:!V:FitMethod=GA:EffSel:Steps=30:Cycles=3:PopSize=100:SC_steps=10:SC_rate=5:SC_factor=0.95:VarProp=FSmart"
--- Cuts           : The following options are set:
--- Cuts           : - By User:
--- Cuts           :     V: "False" [Verbose mode]
--- Cuts           :     H: "True" [Print classifier-specific help message]
--- Cuts           :     FitMethod: "GA" [Minimization method (GA, SA, and MC are the primary methods to be used; the others have been introduced for testing purposes and are deprecated)]

--- Cuts           :     EffMethod: "EffSel" [Selection Method]
--- Cuts           : - Default:
--- Cuts           :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- Cuts           :     Normalise: "False" [Normalise input variables]
--- Cuts           :     VarTransform: "None" [Variable transformation method]
--- Cuts           :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- Cuts           :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- Cuts           :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- Cuts           :     VerboseLevel: "Default" [Verbosity level]
--- Cuts           :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- Cuts           :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- Cuts           :     CutRangeMin[0]: "-1" [Minimum of allowed cut range (set per variable)]
--- Cuts           :     CutRangeMin[1]: "-1"
--- Cuts           :     CutRangeMin[2]: "-1"
--- Cuts           :     CutRangeMin[3]: "-1"
--- Cuts           :     CutRangeMax[0]: "-1" [Maximum of allowed cut range (set per variable)]
--- Cuts           :     CutRangeMax[1]: "-1"
--- Cuts           :     CutRangeMax[2]: "-1"
--- Cuts           :     CutRangeMax[3]: "-1"
--- Cuts           :     VarProp[0]: "FSmart" [Categorisation of cuts]
--- Cuts           :     VarProp[1]: "FSmart"
--- Cuts           :     VarProp[2]: "FSmart"
--- Cuts           :     VarProp[3]: "FSmart"
--- Cuts           : Use optimization method: 'Genetic Algorithm'
--- Cuts           : Use efficiency computation method: 'Event Selection'
--- Cuts           : Use "FSmart" cuts for variable: 'HT'
--- Cuts           : Use "FSmart" cuts for variable: 'Jet1Pt'
--- Cuts           : Use "FSmart" cuts for variable: 'DeltaRJet1Jet2'
--- Cuts           : Use "FSmart" cuts for variable: 'WTransverseMass'
--- Factory        : Booking method: CutsGA
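CutsGA searches the space of rectangular cuts with a genetic algorithm (Cycles generations over a PopSize population, per the options above). A deliberately minimal one-dimensional GA sketch of the idea, not TMVA's implementation: evolve a cut x > c that maximises background rejection while keeping signal efficiency above a target.

```python
# Minimal genetic-algorithm sketch in the spirit of CutsGA (illustrative
# only): evolve a one-sided cut x > c maximising background rejection
# subject to signal efficiency >= 90%.
import random

def fitness(cut, signal, background, min_sig_eff=0.9):
    sig_eff = sum(x > cut for x in signal) / len(signal)
    bkg_rej = sum(x <= cut for x in background) / len(background)
    return bkg_rej if sig_eff >= min_sig_eff else -1.0   # infeasible cut

def evolve_cut(signal, background, pop_size=100, cycles=30, seed=1):
    rng = random.Random(seed)
    lo, hi = min(signal + background), max(signal + background)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(cycles):
        pop.sort(key=lambda c: fitness(c, signal, background), reverse=True)
        parents = pop[:pop_size // 2]                     # elitist selection
        children = [rng.choice(parents) + rng.gauss(0, 0.1 * (hi - lo))
                    for _ in range(pop_size - len(parents))]  # mutation
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, signal, background))

signal = [0.5 + 0.01 * i for i in range(100)]     # toy signal, higher x
background = [0.01 * i for i in range(100)]       # toy background, lower x
best = evolve_cut(signal, background)
```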
--- Likelihood     : Parsing option string: 
--- Likelihood     : "!H:!V:!TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=10:NSmoothBkg[0]=10:NSmoothBkg[1]=10:NSmooth=10:NAvEvtPerBin=50"
--- Likelihood     : The following options are set:
--- Likelihood     : - By User:
--- Likelihood     :     V: "False" [Verbose mode]
--- Likelihood     :     H: "False" [Print classifier-specific help message]
--- Likelihood     :     NSmooth: "10" [Number of smoothing iterations for the input histograms]
--- Likelihood     :     NAvEvtPerBin: "50" [Average number of events per PDF bin]
--- Likelihood     :     TransformOutput: "False" [Transform likelihood output by inverse sigmoid function]
--- Likelihood     : - Default:
--- Likelihood     :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- Likelihood     :     Normalise: "False" [Normalise input variables]
--- Likelihood     :     VarTransform: "None" [Variable transformation method]
--- Likelihood     :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- Likelihood     :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- Likelihood     :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- Likelihood     :     VerboseLevel: "Default" [Verbosity level]
--- Likelihood     :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- Likelihood     :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- Likelihood     :     NSmoothSig[0]: "10" [Number of smoothing iterations for the input histograms]
--- Likelihood     :     NSmoothSig[1]: "-1"
--- Likelihood     :     NSmoothSig[2]: "-1"
--- Likelihood     :     NSmoothSig[3]: "-1"
--- Likelihood     :     NSmoothBkg[0]: "10" [Number of smoothing iterations for the input histograms]
--- Likelihood     :     NSmoothBkg[1]: "10"
--- Likelihood     :     NSmoothBkg[2]: "-1"
--- Likelihood     :     NSmoothBkg[3]: "-1"
--- Likelihood     :     NAvEvtPerBinSig[0]: "-1" [Average num of events per PDF bin and variable (signal)]
--- Likelihood     :     NAvEvtPerBinSig[1]: "-1"
--- Likelihood     :     NAvEvtPerBinSig[2]: "-1"
--- Likelihood     :     NAvEvtPerBinSig[3]: "-1"
--- Likelihood     :     NAvEvtPerBinBkg[0]: "-1" [Average num of events per PDF bin and variable (background)]
--- Likelihood     :     NAvEvtPerBinBkg[1]: "-1"
--- Likelihood     :     NAvEvtPerBinBkg[2]: "-1"
--- Likelihood     :     NAvEvtPerBinBkg[3]: "-1"
--- Likelihood     :     PDFInterpol[0]: "Spline2" [Method of interpolating reference histograms (e.g. Spline2 or KDE)]
--- Likelihood     :     PDFInterpol[1]: "Spline2"
--- Likelihood     :     PDFInterpol[2]: "Spline2"
--- Likelihood     :     PDFInterpol[3]: "Spline2"
--- Likelihood     :     KDEtype: "Gauss" [KDE kernel type (1=Gauss)]
--- Likelihood     :     KDEiter: "Nonadaptive" [Number of iterations (1=non-adaptive, 2=adaptive)]
--- Likelihood     :     KDEFineFactor: "1" [Fine-tuning factor for adaptive KDE: factor to multiply the width of the kernel]
--- Likelihood     :     KDEborder: "None" [Border effects treatment (1=no treatment, 2=kernel renormalization, 3=sample mirroring)]
--- Factory        : Booking method: Likelihood
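The projective likelihood classifier multiplies the per-variable signal and background PDF values (obtained here from Spline2-interpolated, smoothed histograms) and returns the ratio R = L_S / (L_S + L_B). A sketch of just that combination step, with the PDF values taken as given:

```python
# Sketch of the projective-likelihood ratio (illustrative; TMVA builds the
# per-variable PDFs from smoothed reference histograms, not shown here).
def likelihood_ratio(pdf_s_values, pdf_b_values):
    ls = lb = 1.0
    for ps, pb in zip(pdf_s_values, pdf_b_values):
        ls *= ps
        lb *= pb
    return ls / (ls + lb)

# Two variables, each twice as signal-like as background-like -> R = 0.8.
r = likelihood_ratio([0.4, 0.4], [0.2, 0.2])
```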
--- Likelihood     : Parsing option string: 
--- Likelihood     : "!H:!V:!TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=10:NSmoothBkg[0]=10:NSmooth=5:NAvEvtPerBin=50:VarTransform=PCA"
--- Likelihood     : The following options are set:
--- Likelihood     : - By User:
--- Likelihood     :     VarTransform: "PCA" [Variable transformation method]
--- Likelihood     :     V: "False" [Verbose mode]
--- Likelihood     :     H: "False" [Print classifier-specific help message]
--- Likelihood     :     NSmooth: "5" [Number of smoothing iterations for the input histograms]
--- Likelihood     :     NAvEvtPerBin: "50" [Average number of events per PDF bin]
--- Likelihood     :     TransformOutput: "False" [Transform likelihood output by inverse sigmoid function]
--- Likelihood     : - Default:
--- Likelihood     :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- Likelihood     :     Normalise: "False" [Normalise input variables]
--- Likelihood     :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- Likelihood     :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- Likelihood     :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- Likelihood     :     VerboseLevel: "Default" [Verbosity level]
--- Likelihood     :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- Likelihood     :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- Likelihood     :     NSmoothSig[0]: "10" [Number of smoothing iterations for the input histograms]
--- Likelihood     :     NSmoothSig[1]: "-1"
--- Likelihood     :     NSmoothSig[2]: "-1"
--- Likelihood     :     NSmoothSig[3]: "-1"
--- Likelihood     :     NSmoothBkg[0]: "10" [Number of smoothing iterations for the input histograms]
--- Likelihood     :     NSmoothBkg[1]: "-1"
--- Likelihood     :     NSmoothBkg[2]: "-1"
--- Likelihood     :     NSmoothBkg[3]: "-1"
--- Likelihood     :     NAvEvtPerBinSig[0]: "-1" [Average num of events per PDF bin and variable (signal)]
--- Likelihood     :     NAvEvtPerBinSig[1]: "-1"
--- Likelihood     :     NAvEvtPerBinSig[2]: "-1"
--- Likelihood     :     NAvEvtPerBinSig[3]: "-1"
--- Likelihood     :     NAvEvtPerBinBkg[0]: "-1" [Average num of events per PDF bin and variable (background)]
--- Likelihood     :     NAvEvtPerBinBkg[1]: "-1"
--- Likelihood     :     NAvEvtPerBinBkg[2]: "-1"
--- Likelihood     :     NAvEvtPerBinBkg[3]: "-1"
--- Likelihood     :     PDFInterpol[0]: "Spline2" [Method of interpolating reference histograms (e.g. Spline2 or KDE)]
--- Likelihood     :     PDFInterpol[1]: "Spline2"
--- Likelihood     :     PDFInterpol[2]: "Spline2"
--- Likelihood     :     PDFInterpol[3]: "Spline2"
--- Likelihood     :     KDEtype: "Gauss" [KDE kernel type (1=Gauss)]
--- Likelihood     :     KDEiter: "Nonadaptive" [Number of iterations (1=non-adaptive, 2=adaptive)]
--- Likelihood     :     KDEFineFactor: "1" [Fine-tuning factor for adaptive KDE: factor to multiply the width of the kernel]
--- Likelihood     :     KDEborder: "None" [Border effects treatment (1=no treatment, 2=kernel renormalization, 3=sample mirroring)]
--- DataSet        : New variable Transformation PCATransform requested and created.
--- TransBase      : Create scatter and profile plots in target-file directory: 
--- TransBase      : TMVAout.root:/InputVariables_PCATransform/CorrelationPlots
--- TransBase      : Ranking input variables...
--- PCATransform   : Ranking result (top variable is best ranked)
--- PCATransform   : ----------------------------------------------------------------
--- PCATransform   : Rank : Variable        : Separation
--- PCATransform   : ----------------------------------------------------------------
--- PCATransform   :    1 : HT              : 2.483e-01
--- PCATransform   :    2 : DeltaRJet1Jet2  : 9.027e-02
--- PCATransform   :    3 : Jet1Pt          : 8.891e-02
--- PCATransform   :    4 : WTransverseMass : 5.416e-02
--- PCATransform   : ----------------------------------------------------------------
--- Likelihood     : Use principal component transformation
--- Factory        : Booking method: LikelihoodPCA
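The PCA transformation booked above rotates the input variables into the eigenbasis of their covariance matrix, so the transformed variables are decorrelated. A sketch using numpy (an assumption; TMVA derives the rotation internally, here from whatever sample is passed in):

```python
# Sketch of a PCA variable transformation (assuming numpy is available;
# illustrative, not TMVA's VariablePCATransform).
import numpy as np

def pca_transform(data):
    """Rotate events (rows) into the principal-component basis."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return centered @ eigvecs[:, order]

# Perfectly correlated pair: all variance ends up in the first component.
x = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
y = pca_transform(x)
```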
--- PDERS          : Parsing option string: 
--- PDERS          : "!H:!V:NormTree=T:VolumeRangeMode=Adaptive:KernelEstimator=Gauss:GaussSigma=0.3:NEventsMin=400:NEventsMax=600"
--- PDERS          : The following options are set:
--- PDERS          : - By User:
--- PDERS          :     V: "False" [Verbose mode]
--- PDERS          :     H: "False" [Print classifier-specific help message]
--- PDERS          :     VolumeRangeMode: "Adaptive" [Method to determine volume size]
--- PDERS          :     KernelEstimator: "Gauss" [Kernel estimation function]
--- PDERS          :     NEventsMin: "400" [nEventsMin for adaptive volume range]
--- PDERS          :     NEventsMax: "600" [nEventsMax for adaptive volume range]
--- PDERS          :     GaussSigma: "0.3" [Width (wrt volume size) of Gaussian kernel estimator]
--- PDERS          :     NormTree: "True" [Normalize binary search tree]
--- PDERS          : - Default:
--- PDERS          :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- PDERS          :     Normalise: "False" [Normalise input variables]
--- PDERS          :     VarTransform: "None" [Variable transformation method]
--- PDERS          :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- PDERS          :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- PDERS          :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- PDERS          :     VerboseLevel: "Default" [Verbosity level]
--- PDERS          :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- PDERS          :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- PDERS          :     DeltaFrac: "3" [nEventsMin/Max for minmax and rms volume range]
--- PDERS          :     MaxVIterations: "150" [MaxVIterations for adaptive volume range]
--- PDERS          :     InitialScale: "0.99" [InitialScale for adaptive volume range]
--- Factory        : Booking method: PDERS
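PDERS estimates the local signal and background densities around a test point, here with a Gaussian kernel of relative width GaussSigma=0.3. A one-dimensional sketch of the kernel-weighted local signal fraction (illustrative; TMVA's adaptive volume search and binary search tree are omitted):

```python
# Illustrative PDERS-style estimate: weigh training events near the test
# point with a Gaussian kernel and return the local signal fraction.
import math

def pders_score(x, signal, background, sigma=0.3):
    k = lambda pts: sum(math.exp(-((x - p) ** 2) / (2 * sigma ** 2))
                        for p in pts)
    s, b = k(signal), k(background)
    return s / (s + b) if s + b > 0 else 0.5

# A point sitting inside the signal cluster scores close to 1.
score = pders_score(1.0, signal=[0.9, 1.0, 1.1], background=[3.0, 3.1, 3.2])
```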
--- KNN            : Parsing option string: 
--- KNN            : "nkNN=400:TreeOptDepth=6:ScaleFrac=0.8:!UseKernel:!Trim"
--- KNN            : The following options are set:
--- KNN            : - By User:
--- KNN            :     nkNN: "400" [Number of k-nearest neighbors]
--- KNN            :     TreeOptDepth: "6" [Binary tree optimisation depth]
--- KNN            :     ScaleFrac: "0.8" [Fraction of events used for scaling]
--- KNN            :     UseKernel: "False" [Use polynomial kernel weight]
--- KNN            :     Trim: "False" [Use equal number of signal and background events]
--- KNN            : - Default:
--- KNN            :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- KNN            :     Normalise: "False" [Normalise input variables]
--- KNN            :     VarTransform: "None" [Variable transformation method]
--- KNN            :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- KNN            :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- KNN            :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- KNN            :     V: "False" [Verbose mode]
--- KNN            :     VerboseLevel: "Default" [Verbosity level]
--- KNN            :     H: "False" [Print classifier-specific help message]
--- KNN            :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- KNN            :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- Factory        : Booking method: KNN
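The KNN classifier returns the signal fraction among the nkNN training events closest to the test point. A one-dimensional sketch (illustrative; TMVA uses a kd-tree, ScaleFrac-based variable scaling, and an optional polynomial kernel, all omitted here):

```python
# Sketch of a k-nearest-neighbour vote (illustrative, brute-force search).
def knn_signal_fraction(x, labelled, k=3):
    """labelled: list of (value, is_signal) pairs; returns the signal
    fraction among the k events closest to x."""
    nearest = sorted(labelled, key=lambda p: abs(p[0] - x))[:k]
    return sum(is_sig for _, is_sig in nearest) / k

data = [(0.1, True), (0.2, True), (0.3, True), (2.0, False), (2.1, False)]
frac = knn_signal_fraction(0.15, data, k=3)    # deep inside the signal cluster
```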
--- HMatrix        : Parsing option string: 
--- HMatrix        : "!H:!V"
--- HMatrix        : The following options are set:
--- HMatrix        : - By User:
--- HMatrix        :     V: "False" [Verbose mode]
--- HMatrix        :     H: "False" [Print classifier-specific help message]
--- HMatrix        : - Default:
--- HMatrix        :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- HMatrix        :     Normalise: "True" [Normalise input variables]
--- HMatrix        :     VarTransform: "None" [Variable transformation method]
--- HMatrix        :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- HMatrix        :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- HMatrix        :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- HMatrix        :     VerboseLevel: "Default" [Verbosity level]
--- HMatrix        :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- HMatrix        :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- Factory        : Booking method: HMatrix
--- Fisher         : Parsing option string: 
--- Fisher         : "H:!V:!Normalise:CreateMVAPdfs:Fisher:NbinsMVAPdf=50:NsmoothMVAPdf=1"
--- Fisher         : The following options are set:
--- Fisher         : - By User:
--- Fisher         :     Normalise: "False" [Normalise input variables]
--- Fisher         :     NbinsMVAPdf: "50" [Number of bins used for the PDFs of classifier outputs]
--- Fisher         :     NsmoothMVAPdf: "1" [Number of smoothing iterations for classifier PDFs]
--- Fisher         :     V: "False" [Verbose mode]
--- Fisher         :     H: "True" [Print classifier-specific help message]
--- Fisher         :     CreateMVAPdfs: "True" [Create PDFs for classifier outputs (signal and background)]
--- Fisher         :     Method: "Fisher" [Discrimination method]
--- Fisher         : - Default:
--- Fisher         :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- Fisher         :     VarTransform: "None" [Variable transformation method]
--- Fisher         :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- Fisher         :     VerboseLevel: "Default" [Verbosity level]
--- Fisher         :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- Factory        : Booking method: Fisher
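Fisher's discriminant projects the events onto the direction w proportional to W^-1 (mu_S - mu_B), where W is the within-class covariance sum, maximising the separation of the projected class means. A sketch (assuming numpy; illustrative, not TMVA's MethodFisher):

```python
# Sketch of the Fisher discriminant direction w ~ W^{-1} (mu_S - mu_B),
# with W the sum of the per-class covariance matrices (numpy assumed).
import numpy as np

def fisher_direction(sig, bkg):
    mu_s, mu_b = sig.mean(axis=0), bkg.mean(axis=0)
    within = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)
    return np.linalg.solve(within, mu_s - mu_b)

sig = np.array([[2.0, 0.1], [2.2, -0.1], [1.8, 0.0], [2.1, 0.05]])
bkg = np.array([[0.0, 0.1], [0.2, -0.1], [-0.2, 0.0], [0.1, 0.05]])
w = fisher_direction(sig, bkg)
# Projected signal events lie entirely above projected background events.
```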
--- FDA            : Parsing option string: 
--- FDA            : "H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=MINUIT:ErrorLevel=1:PrintLevel=-1:FitStrategy=2:UseImprove:UseMinos:SetBatch"
--- FDA            : The following options are set:
--- FDA            : - By User:
--- FDA            :     V: "False" [Verbose mode]
--- FDA            :     H: "True" [Print classifier-specific help message]
--- FDA            :     Formula: "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3" [The discrimination formula]
--- FDA            :     ParRanges: "(-1,1);(-10,10);(-10,10);(-10,10);(-10,10)" [Parameter ranges]
--- FDA            :     FitMethod: "MINUIT" [Optimisation Method]
--- FDA            : - Default:
--- FDA            :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- FDA            :     Normalise: "False" [Normalise input variables]
--- FDA            :     VarTransform: "None" [Variable transformation method]
--- FDA            :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- FDA            :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- FDA            :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- FDA            :     VerboseLevel: "Default" [Verbosity level]
--- FDA            :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- FDA            :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- FDA            :     Converger: "None" [FitMethod uses Converger to improve result]
--- FDA            : User-defined formula string       : "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
--- FDA            : TFormula-compatible formula string: "[0]+[1]*[5]+[2]*[6]+[3]*[7]+[4]*[8]"
--- FDA            : Creating and compiling formula
--- FDA_Fitter_M...: Parsing option string: 
--- FDA_Fitter_M...: "!H:!V:!Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:!ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):!FitMethod=MINUIT:ErrorLevel=1:PrintLevel=-1:FitStrategy=2:UseImprove:UseMinos:SetBatch"
--- FDA_Fitter_M...: The following options are set:
--- FDA_Fitter_M...: - By User:
--- FDA_Fitter_M...:     ErrorLevel: "1" [TMinuit: error level: 0.5=logL fit, 1=chi-squared fit]
--- FDA_Fitter_M...:     PrintLevel: "-1" [TMinuit: output level: -1=least, 0, +1=all garbage]
--- FDA_Fitter_M...:     FitStrategy: "2" [TMinuit: fit strategy: 2=best]
--- FDA_Fitter_M...:     UseImprove: "True" [TMinuit: use IMPROVE]
--- FDA_Fitter_M...:     UseMinos: "True" [TMinuit: use MINOS]
--- FDA_Fitter_M...:     SetBatch: "True" [TMinuit: use batch mode]
--- FDA_Fitter_M...: - Default:
--- FDA_Fitter_M...:     PrintWarnings: "False" [TMinuit: suppress warnings]
--- FDA_Fitter_M...:     MaxCalls: "1000" [TMinuit: approximate maximum number of function calls]
--- FDA_Fitter_M...:     Tolerance: "0.1" [TMinuit: tolerance to the function value at the minimum]
--- Factory        : Booking method: FDA_MT
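FDA fits the free parameters of the user-defined formula (0)+(1)*x0+... so that the formula discriminates signal from background; here MINUIT does the minimisation. A much-simplified sketch that fits the same linear form by least squares against targets 1 (signal) and 0 (background), assuming numpy (illustrative; not TMVA's estimator or MINUIT):

```python
# Simplified FDA sketch: fit par0 + par1*x0 + ... to targets 1/0 by least
# squares (TMVA minimises a misclassification estimator with MINUIT).
import numpy as np

def fit_linear_fda(events, targets):
    design = np.hstack([np.ones((len(events), 1)), events])
    pars, *_ = np.linalg.lstsq(design, targets, rcond=None)
    return pars

events = np.array([[2.0], [2.2], [0.0], [0.2]])   # one toy variable
targets = np.array([1.0, 1.0, 0.0, 0.0])          # 1=signal, 0=background
pars = fit_linear_fda(events, targets)
score = lambda x: pars[0] + pars[1] * x
```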
--- MLP            : Parsing option string: 
--- MLP            : "H:!V:!Normalise:NeuronType=tanh:NCycles=200:HiddenLayers=N+1,N:TestRate=5"
--- MLP            : The following options are set:
--- MLP            : - By User:
--- MLP            :     Normalise: "False" [Normalise input variables]
--- MLP            :     V: "False" [Verbose mode]
--- MLP            :     H: "True" [Print classifier-specific help message]
--- MLP            :     NCycles: "200" [Number of training cycles]
--- MLP            :     HiddenLayers: "N+1,N" [Specification of hidden layer architecture (N stands for number of variables; any integers may also be used)]
--- MLP            :     NeuronType: "tanh" [Neuron activation function type]
--- MLP            :     TestRate: "5" [Test for overtraining performed every #th epoch]
--- MLP            : - Default:
--- MLP            :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- MLP            :     VarTransform: "None" [Variable transformation method]
--- MLP            :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- MLP            :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- MLP            :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- MLP            :     VerboseLevel: "Default" [Verbosity level]
--- MLP            :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- MLP            :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- MLP            :     NeuronInputType: "sum" [Neuron input function type]
--- MLP            :     TrainingMethod: "BP" [Train with Back-Propagation (BP - default) or Genetic Algorithm (GA - slower and worse)]
--- MLP            :     LearningRate: "0.02" [ANN learning rate parameter]
--- MLP            :     DecayRate: "0.01" [Decay rate for learning parameter]
--- MLP            :     BPMode: "sequential" [Back-propagation learning mode: sequential or batch]
--- MLP            :     BatchSize: "-1" [Batch size: number of events/batch, only set if in Batch Mode, -1 for BatchSize=number_of_events]
--- MLP            : Building Network
--- MLP            : Initializing weights
--- Factory        : Booking method: MLP
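The booked network has two hidden layers (N+1 and N neurons for N input variables), tanh activations, and sum-type neuron inputs. A single-hidden-layer forward pass illustrating those two choices (weights here are arbitrary placeholders; TMVA learns them by back-propagation over NCycles epochs):

```python
# Sketch of one MLP forward pass with sum inputs and tanh activations
# (illustrative; the weights are arbitrary, not trained values).
import math

def mlp_forward(inputs, hidden_weights, output_weights):
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sum(w * h for w, h in zip(output_weights, hidden))

out = mlp_forward([1.0, -1.0],
                  hidden_weights=[[0.5, -0.5], [0.1, 0.2]],
                  output_weights=[1.0, -1.0])
```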
--- SVM            : Parsing option string: 
--- SVM            : "Sigma=2:C=1:Tol=0.001:Kernel=Gauss"
--- SVM            : The following options are set:
--- SVM            : - By User:
--- SVM            :     C: "1" [C parameter]
--- SVM            :     Tol: "0.001" [Tolerance parameter]
--- SVM            :     Sigma: "2" [Kernel parameter: sigma]
--- SVM            :     Kernel: "Gauss" [Uses kernel function]
--- SVM            : - Default:
--- SVM            :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- SVM            :     Normalise: "True" [Normalise input variables]
--- SVM            :     VarTransform: "None" [Variable transformation method]
--- SVM            :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- SVM            :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- SVM            :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- SVM            :     V: "False" [Verbose mode]
--- SVM            :     VerboseLevel: "Default" [Verbosity level]
--- SVM            :     H: "False" [Print classifier-specific help message]
--- SVM            :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- SVM            :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- SVM            :     MaxIter: "1000" [Maximum number of training loops]
--- SVM            :     Order: "3" [Polynomial Kernel parameter: polynomial order]
--- SVM            :     Theta: "1" [Sigmoid Kernel parameter: theta]
--- SVM            :     Kappa: "1" [Sigmoid Kernel parameter: kappa]
--- Factory        : Booking method: SVM_Gauss
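SVM_Gauss replaces dot products with a Gaussian kernel, so similarity falls off with the distance between events. A sketch of one common convention, K(x, y) = exp(-|x-y|^2 / (2 sigma^2)) (the exact sigma convention in TMVA may differ; treat this as illustrative):

```python
# Sketch of a Gaussian (RBF) kernel: 1 for identical points, falling
# towards 0 with distance (sigma convention is an assumption).
import math

def gauss_kernel(x, y, sigma=2.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

k_same = gauss_kernel([1.0, 2.0], [1.0, 2.0])   # identical points
k_far = gauss_kernel([0.0, 0.0], [10.0, 10.0])  # distant points
```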
--- BDT            : Parsing option string: 
--- BDT            : "!H:!V:NTrees=400:BoostType=AdaBoost:SeparationType=GiniIndex:nCuts=20:PruneMethod=CostComplexity:PruneStrength=1.5"
--- BDT            : The following options are set:
--- BDT            : - By User:
--- BDT            :     V: "False" [Verbose mode]
--- BDT            :     H: "False" [Print classifier-specific help message]
--- BDT            :     NTrees: "400" [Number of trees in the forest]
--- BDT            :     BoostType: "AdaBoost" [Boosting type for the trees in the forest]
--- BDT            :     SeparationType: "GiniIndex" [Separation criterion for node splitting]
--- BDT            :     nCuts: "20" [Number of steps during node cut optimisation]
--- BDT            :     PruneStrength: "1.5" [Pruning strength]
--- BDT            :     PruneMethod: "CostComplexity" [Method used for pruning (removal) of statistically insignificant branches]
--- BDT            : - Default:
--- BDT            :     D: "False" [Use-decorrelated-variables flag (depreciated)]
--- BDT            :     Normalise: "False" [Normalise input variables]
--- BDT            :     VarTransform: "None" [Variable transformation method]
--- BDT            :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- BDT            :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- BDT            :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- BDT            :     VerboseLevel: "Default" [Verbosity level]
--- BDT            :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- BDT            :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- BDT            :     AdaBoostBeta: "1" [Parameter for AdaBoost algorithm]
--- BDT            :     UseRandomisedTrees: "False" [Choose a random subset of variables at each node split]
--- BDT            :     UseNvars: "4" [Number of variables used if randomised Tree option is chosen]
--- BDT            :     UseWeightedTrees: "True" [Use weighted trees or simple average in classification from the forest]
--- BDT            :     UseYesNoLeaf: "True" [Use Sig or Bkg categories, or the purity=S/(S+B) as classification of the leaf node]
--- BDT            :     NodePurityLimit: "0.5" [In boosting/pruning, nodes with purity > NodePurityLimit are signal; background otherwise.]
--- BDT            :     nEventsMin: "20" [Minimum number of events required in a leaf node (default: max(20, N_train/(Nvar^2)/10) ) ]
--- BDT            :     PruneBeforeBoost: "False" [Flag to prune the tree before applying boosting algorithm]
--- BDT            :     NoNegWeightsInTraining: "False" [Ignore negative event weights in the training process]
--- BDT            : Events with negative event weights are ignored during the BDT training (option NoNegWeightsInTraining=0)
--- BDT            : <InitEventSample> Internally I use 1339 for Training  and 0 for Validation 
--- Factory        : Booking method: BDT
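The booking above uses SeparationType=GiniIndex with nCuts=20 scan points per variable: each node split is chosen to maximise the decrease in Gini index. A minimal sketch of that criterion (a generic illustration, not TMVA's implementation; event weights are ignored here):

```python
def gini(n_sig, n_bkg):
    """Gini index of a node: p * (1 - p), with p the signal purity."""
    n = n_sig + n_bkg
    if n == 0:
        return 0.0
    p = n_sig / n
    return p * (1.0 - p)

def separation_gain(parent, left, right):
    """Decrease in weighted Gini index achieved by a split.

    Each argument is an (n_sig, n_bkg) pair; left/right partition the parent."""
    n_parent = sum(parent)
    n_left, n_right = sum(left), sum(right)
    return gini(*parent) - (n_left / n_parent) * gini(*left) \
                         - (n_right / n_parent) * gini(*right)
```

A perfectly separating split of a 50/50 node yields the maximum gain of 0.25; the nCuts option controls how finely each variable's range is scanned for the best such split.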
--- BDT            : Parsing option string: 
--- BDT            : "!H:!V:NTrees=400:BoostType=AdaBoost:SeparationType=GiniIndex:nCuts=20:PruneMethod=CostComplexity:PruneStrength=1.5:VarTransform=Decorrelate"
--- BDT            : The following options are set:
--- BDT            : - By User:
--- BDT            :     VarTransform: "Decorrelate" [Variable transformation method]
--- BDT            :     V: "False" [Verbose mode]
--- BDT            :     H: "False" [Print classifier-specific help message]
--- BDT            :     NTrees: "400" [Number of trees in the forest]
--- BDT            :     BoostType: "AdaBoost" [Boosting type for the trees in the forest]
--- BDT            :     SeparationType: "GiniIndex" [Separation criterion for node splitting]
--- BDT            :     nCuts: "20" [Number of steps during node cut optimisation]
--- BDT            :     PruneStrength: "1.5" [Pruning strength]
--- BDT            :     PruneMethod: "CostComplexity" [Method used for pruning (removal) of statistically insignificant branches]
--- BDT            : - Default:
--- BDT            :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- BDT            :     Normalise: "False" [Normalise input variables]
--- BDT            :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- BDT            :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- BDT            :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- BDT            :     VerboseLevel: "Default" [Verbosity level]
--- BDT            :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- BDT            :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- BDT            :     AdaBoostBeta: "1" [Parameter for AdaBoost algorithm]
--- BDT            :     UseRandomisedTrees: "False" [Choose a random subset of variables at each node split]
--- BDT            :     UseNvars: "4" [Number of variables used if randomised Tree option is chosen]
--- BDT            :     UseWeightedTrees: "True" [Use weighted trees or simple average in classification from the forest]
--- BDT            :     UseYesNoLeaf: "True" [Use Sig or Bkg categories, or the purity=S/(S+B) as classification of the leaf node]
--- BDT            :     NodePurityLimit: "0.5" [In boosting/pruning, nodes with purity > NodePurityLimit are signal; background otherwise.]
--- BDT            :     nEventsMin: "20" [Minimum number of events required in a leaf node (default: max(20, N_train/(Nvar^2)/10) ) ]
--- BDT            :     PruneBeforeBoost: "False" [Flag to prune the tree before applying boosting algorithm]
--- BDT            :     NoNegWeightsInTraining: "False" [Ignore negative event weights in the training process]
--- DataSet        : New variable Transformation DecorrTransform requested and created.
--- TransBase      : Create scatter and profile plots in target-file directory: 
--- TransBase      : TMVAout.root:/InputVariables_DecorrTransform/CorrelationPlots
--- TransBase      : Ranking input variables...
--- DecorrTransform: Ranking result (top variable is best ranked)
--- DecorrTransform: ----------------------------------------------------------------
--- DecorrTransform: Rank : Variable        : Separation
--- DecorrTransform: ----------------------------------------------------------------
--- DecorrTransform:    1 : HT              : 2.083e-01
--- DecorrTransform:    2 : Jet1Pt          : 8.879e-02
--- DecorrTransform:    3 : WTransverseMass : 8.553e-02
--- DecorrTransform:    4 : DeltaRJet1Jet2  : 4.945e-02
--- DecorrTransform: ----------------------------------------------------------------
--- BDT            : Events with negative event weights are ignored during the BDT training (option NoNegWeightsInTraining=0)
--- BDT            : <InitEventSample> Internally I use 1339 for Training  and 0 for Validation 
--- Factory        : Booking method: BDTD
--- RuleFit        : Parsing option string: 
--- RuleFit        : "H:!V:RuleFitModule=RFTMVA:Model=ModRuleLinear:MinImp=0.001:RuleMinDist=0.001:NTrees=20:fEventsMin=0.01:fEventsMax=0.5:GDTau=-1.0:GDTauPrec=0.01:GDStep=0.01:GDNSteps=10000:GDErrScale=1.02"
--- RuleFit        : The following options are set:
--- RuleFit        : - By User:
--- RuleFit        :     V: "False" [Verbose mode]
--- RuleFit        :     H: "True" [Print classifier-specific help message]
--- RuleFit        :     GDTau: "-1" [Gradient-directed path: default fit cut-off]
--- RuleFit        :     GDTauPrec: "0.01" [Gradient-directed path: precision of tau]
--- RuleFit        :     GDStep: "0.01" [Gradient-directed path: step size]
--- RuleFit        :     GDNSteps: "10000" [Gradient-directed path: number of steps]
--- RuleFit        :     GDErrScale: "1.02" [Stop scan when error>scale*errmin]
--- RuleFit        :     fEventsMin: "0.01" [Minimum fraction of events in a splittable node]
--- RuleFit        :     fEventsMax: "0.5" [Maximum fraction of events in a splittable node]
--- RuleFit        :     nTrees: "20" [Number of trees in the forest]
--- RuleFit        :     RuleMinDist: "0.001" [Minimum distance between rules]
--- RuleFit        :     MinImp: "0.001" [Minimum rule importance accepted]
--- RuleFit        :     Model: "ModRuleLinear" [Model to be used]
--- RuleFit        :     RuleFitModule: "RFTMVA" [Which RuleFit module to use]
--- RuleFit        : - Default:
--- RuleFit        :     D: "False" [Use-decorrelated-variables flag (deprecated)]
--- RuleFit        :     Normalise: "False" [Normalise input variables]
--- RuleFit        :     VarTransform: "None" [Variable transformation method]
--- RuleFit        :     VarTransformType: "Signal" [Use signal or background events to derive the variable transformation (the transformation is applied to both event types, of course)]
--- RuleFit        :     NbinsMVAPdf: "60" [Number of bins used for the PDFs of classifier outputs]
--- RuleFit        :     NsmoothMVAPdf: "2" [Number of smoothing iterations for classifier PDFs]
--- RuleFit        :     VerboseLevel: "Default" [Verbosity level]
--- RuleFit        :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
--- RuleFit        :     TxtWeightFilesOnly: "True" [If True: write all training results (weights) as text files (False: some are written in ROOT format)]
--- RuleFit        :     GDPathEveFrac: "0.5" [Fraction of events used for the path search]
--- RuleFit        :     GDValidEveFrac: "0.5" [Fraction of events used for the validation]
--- RuleFit        :     ForestType: "AdaBoost" [Method to use for forest generation]
--- RuleFit        :     RFWorkDir: "./rulefit" [Friedman's RuleFit module: working dir]
--- RuleFit        :     RFNrules: "2000" [Friedman's RuleFit module: maximum number of rules]
--- RuleFit        :     RFNendnodes: "4" [Friedman's RuleFit module: average number of end nodes]
--- Factory        : Booking method: RuleFit
--- Factory        : Training all methods...
--- Factory        : Train method: CutsGA
--- Cuts           : Option for variable: HT: 'ForceSmart' (#: 3)
--- Cuts           : Option for variable: Jet1Pt: 'ForceSmart' (#: 3)
--- Cuts           : Option for variable: DeltaRJet1Jet2: 'ForceSmart' (#: 3)
--- Cuts           : Option for variable: WTransverseMass: 'ForceSmart' (#: 3)
--- Cuts           : 
--- Cuts           : ================================================================
--- Cuts           : H e l p   f o r   c l a s s i f i e r   [ Cuts ] :
--- Cuts           : 
--- Cuts           : --- Short description:
--- Cuts           : 
--- Cuts           : The optimisation of rectangular cuts performed by TMVA maximises
--- Cuts           : the background rejection at a given signal efficiency, and scans
--- Cuts           : over the full range of the latter quantity. Three optimisation
--- Cuts           : methods are available: Monte Carlo sampling (MC), a Genetic
--- Cuts           : Algorithm (GA), and Simulated Annealing (SA). GA and SA are
--- Cuts           : expected to perform best.
--- Cuts           : 
--- Cuts           : The difficulty of finding the optimal cuts increases strongly with
--- Cuts           : the dimensionality (number of input variables) of the problem.
--- Cuts           : This behaviour is due to the non-uniqueness of the solution space.
--- Cuts           : 
--- Cuts           : --- Performance optimisation:
--- Cuts           : 
--- Cuts           : If the dimensionality exceeds, say, 4 input variables, it is 
--- Cuts           : advisable to scrutinize the separation power of the variables,
--- Cuts           : and to remove the weakest ones. If some among the input variables
--- Cuts           : can be described by a single cut (e.g., because signal tends to be
--- Cuts           : larger than background), this can be indicated to MethodCuts via
--- Cuts           : the "Fsmart" options (see option string). Choosing this option
--- Cuts           : reduces the number of requirements for the variable from 2 (min/max)
--- Cuts           : to a single one (TMVA finds out whether it is to be interpreted as
--- Cuts           : min or max).
--- Cuts           : 
--- Cuts           : --- Performance tuning via configuration options:
--- Cuts           : 
--- Cuts           : Monte Carlo sampling:
--- Cuts           : 
--- Cuts           : Apart from the "Fsmart" option for the variables, the only way
--- Cuts           : to improve the MC sampling is to increase the sampling rate. This
--- Cuts           : is done via the configuration option "MC_NRandCuts". The execution
--- Cuts           : time scales linearly with the sampling rate.
--- Cuts           : 
--- Cuts           : Genetic Algorithm:
--- Cuts           : 
--- Cuts           : The algorithm terminates if no significant fitness increase has
--- Cuts           : been achieved within the last "nsteps" steps of the calculation.
--- Cuts           : Wiggles in the ROC curve or a constant background rejection of 1
--- Cuts           : indicate that the GA did not always converge to the true maximum
--- Cuts           : fitness. In such a case, it is recommended to broaden the search
--- Cuts           : by increasing the population size ("popSize") and to give the GA
--- Cuts           : more time to find improvements by increasing the number of steps
--- Cuts           : ("nsteps"):
--- Cuts           :   -> increase "popSize" (at least >10 * number of variables)
--- Cuts           :   -> increase "nsteps"
--- Cuts           : 
--- Cuts           : Simulated Annealing (SA) algorithm:
--- Cuts           : 
--- Cuts           : "Increasing Adaptive" approach:
--- Cuts           : 
--- Cuts           : The algorithm seeks local minima and explores their neighborhood, while
--- Cuts           : changing the ambient temperature depending on the number of failures
--- Cuts           : in the previous steps. The performance can be improved by increasing
--- Cuts           : the number of iteration steps ("MaxCalls"), or by adjusting the
--- Cuts           : minimal temperature ("MinTemperature"). Manual adjustments of the
--- Cuts           : speed of the temperature increase ("TemperatureScale" and "AdaptiveSpeed")
--- Cuts           : to individual data sets should also help. Summary:
--- Cuts           :   -> increase "MaxCalls"
--- Cuts           :   -> adjust   "MinTemperature"
--- Cuts           :   -> adjust   "TemperatureScale"
--- Cuts           :   -> adjust   "AdaptiveSpeed"
--- Cuts           : 
--- Cuts           : "Decreasing Adaptive" approach:
--- Cuts           : 
--- Cuts           : The algorithm calculates the initial temperature (based on the
--- Cuts           : effectiveness of large steps) and the multiplier that ensures
--- Cuts           : reaching the minimal temperature within the requested number of
--- Cuts           : iteration steps. The performance can be improved by adjusting the
--- Cuts           : minimal temperature ("MinTemperature") and by increasing the
--- Cuts           : number of steps ("MaxCalls"):
--- Cuts           :   -> increase "MaxCalls"
--- Cuts           :   -> adjust   "MinTemperature"
--- Cuts           :  
--- Cuts           : Other kernels:
--- Cuts           : 
--- Cuts           : Alternative ways of computing the temperature change are implemented.
--- Cuts           : Each of them starts from the maximum temperature ("MaxTemperature")
--- Cuts           : and decreases the temperature according to a given
--- Cuts           : prescription:
--- Cuts           : CurrentTemperature =
--- Cuts           :   - Sqrt: InitialTemperature / Sqrt(StepNumber+2) * TemperatureScale
--- Cuts           :   - Log:  InitialTemperature / Log(StepNumber+2) * TemperatureScale
--- Cuts           :   - Homo: InitialTemperature / (StepNumber+2) * TemperatureScale
--- Cuts           :   - Sin:  ( Sin( StepNumber / TemperatureScale ) + 1 ) / (StepNumber + 1) * InitialTemperature + Eps
--- Cuts           :   - Geo:  CurrentTemperature * TemperatureScale
--- Cuts           : 
--- Cuts           : Their performance can be improved by adjusting the initial
--- Cuts           : temperature ("InitialTemperature"), the number of iteration steps
--- Cuts           : ("MaxCalls"), and the multiplier that scales the temperature
--- Cuts           : decrease ("TemperatureScale"):
--- Cuts           :   -> increase "MaxCalls"
--- Cuts           :   -> adjust   "InitialTemperature"
--- Cuts           :   -> adjust   "TemperatureScale"
--- Cuts           :   -> adjust   "KernelTemperature"
--- Cuts           : 
--- Cuts           : <Suppress this message by specifying "!H" in the booking option>
--- Cuts           : ================================================================
--- Cuts           : 
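The temperature kernels listed in the help text can be sketched directly from the printed formulas. Variable names follow the log; this is an illustration of the schedules, not the TMVA code:

```python
import math

def temperature(kernel, step, t0, scale, t_current=None, eps=1e-6):
    """Temperature after `step` iterations for the named cooling kernel.

    t0 = InitialTemperature, scale = TemperatureScale; the Geo kernel cools
    geometrically from the current temperature instead of the step number."""
    if kernel == "Sqrt":
        return t0 / math.sqrt(step + 2) * scale
    if kernel == "Log":
        return t0 / math.log(step + 2) * scale
    if kernel == "Homo":
        return t0 / (step + 2) * scale
    if kernel == "Sin":
        return (math.sin(step / scale) + 1) / (step + 1) * t0 + eps
    if kernel == "Geo":
        return t_current * scale
    raise ValueError(kernel)
```

For example, the Homo kernel halves the initial temperature at step 0 and decays as 1/(step+2) thereafter.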
--- CutsFitter_GA  : Parsing option string: 
--- CutsFitter_GA  : "!H:!V:!FitMethod=GA:!EffSel:Steps=30:Cycles=3:PopSize=100:SC_steps=10:SC_rate=5:SC_factor=0.95:!VarProp=FSmart"
--- CutsFitter_GA  : The following options are set:
--- CutsFitter_GA  : - By User:
--- CutsFitter_GA  :     PopSize: "100" [Population size for GA]
--- CutsFitter_GA  :     Steps: "30" [Number of steps for convergence]
--- CutsFitter_GA  :     Cycles: "3" [Independent cycles of GA fitting]
--- CutsFitter_GA  :     SC_steps: "10" [Spread control, steps]
--- CutsFitter_GA  :     SC_rate: "5" [Spread control, rate: factor is changed depending on the rate]
--- CutsFitter_GA  :     SC_factor: "0.95" [Spread control, factor]
--- CutsFitter_GA  : - Default:
--- CutsFitter_GA  :     ConvCrit: "0.001" [Convergence criteria]
--- CutsFitter_GA  :     SaveBestGen: "1" [Saves the best n results from each generation; these are included in the last cycle]
--- CutsFitter_GA  :     SaveBestCycle: "10" [Saves the best n results from each cycle; these are included in the last cycle]
--- CutsFitter_GA  :     Trim: "False" [Trim the population to PopSize after assessing the fitness of each individual]
--- CutsFitter_GA  :     Seed: "100" [Set seed of random generator (0 gives random seeds)]
--- CutsFitter_GA  : <GeneticFitter> Optimisation, please be patient ... (note: inaccurate progress timing for GA)
--- CutsFitter_GA  : Elapsed time: 2.99 sec                            
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.1
--- Cuts           : Corresponding background efficiency       : 0.0108147
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    1642.11
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    58.4718
--- Cuts           : Cut[ 2]:     2.2199 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=     282.28
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.2
--- Cuts           : Corresponding background efficiency       : 0.0197783
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    243.069
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    565.519
--- Cuts           : Cut[ 2]:   0.892931 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=     288.73
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.3
--- Cuts           : Corresponding background efficiency       : 0.0357081
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    258.284
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=     331.66
--- Cuts           : Cut[ 2]:    1.03693 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    343.833
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.4
--- Cuts           : Corresponding background efficiency       : 0.0574593
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    280.778
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    459.721
--- Cuts           : Cut[ 2]:    1.03693 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    98.7328
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.5
--- Cuts           : Corresponding background efficiency       : 0.100694
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    295.411
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    1068.21
--- Cuts           : Cut[ 2]:   0.787902 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    117.266
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.6
--- Cuts           : Corresponding background efficiency       : 0.177131
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    321.171
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    460.868
--- Cuts           : Cut[ 2]:    1.03693 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    157.666
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.7
--- Cuts           : Corresponding background efficiency       : 0.304926
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    356.563
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    1036.81
--- Cuts           : Cut[ 2]:   0.620059 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    280.334
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.8
--- Cuts           : Corresponding background efficiency       : 0.435049
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    2316.92
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    122.678
--- Cuts           : Cut[ 2]:    1.02789 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    115.868
--- Cuts           : -----------------------------------------------------
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut values for requested signal efficiency: 0.9
--- Cuts           : Corresponding background efficiency       : 0.671083
--- Cuts           : -----------------------------------------------------
--- Cuts           : Cut[ 0]:     -1e+30 <              [HT] <=    515.313
--- Cuts           : Cut[ 1]:     -1e+30 <          [Jet1Pt] <=    620.115
--- Cuts           : Cut[ 2]:   0.887084 <  [DeltaRJet1Jet2] <=      1e+30
--- Cuts           : Cut[ 3]:     -1e+30 < [WTransverseMass] <=    120.256
--- Cuts           : -----------------------------------------------------
--- Cuts           : Creating weight file in text format: weights/TMVAnalysis_CutsGA.weights.txt
--- Cuts           : Creating standalone response class : weights/TMVAnalysis_CutsGA.class.C
--- Cuts           : write monitoring histograms to file: TMVAout.root:/Method_Cuts/CutsGA
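Each block above defines one working point as a set of [min, max] windows, one per input variable; an event passes the working point only if every variable lies inside its window. A minimal sketch applying the windows from the 0.5 signal-efficiency block (the `passes_cuts` helper is hypothetical; variable names and cut values are copied from the log):

```python
# Cut windows from the 0.5 signal-efficiency working point above;
# -1e30 / 1e30 stand in for "no lower/upper bound".
CUTS_EFF_0P5 = {
    "HT":              (-1e30, 295.411),
    "Jet1Pt":          (-1e30, 1068.21),
    "DeltaRJet1Jet2":  (0.787902, 1e30),
    "WTransverseMass": (-1e30, 117.266),
}

def passes_cuts(event, cuts):
    """event: dict of variable -> value; cuts: dict of variable -> (lo, hi).

    Returns True if every variable satisfies lo < value <= hi."""
    return all(lo < event[var] <= hi for var, (lo, hi) in cuts.items())
```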
--- Factory        : Train method: Likelihood
--- Likelihood     : Filling reference histograms
--- PDF            : Validation result for PDF "HT signal training": 
--- PDF            :     chi2/ndof(!=0) = 39.4/7 = 5.63 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [3(2),3(0),2(0),0(0)]
--- PDF            : Validation result for PDF "Jet1Pt signal training": 
--- PDF            :     chi2/ndof(!=0) = 394.1/5 = 78.81 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [4(1),4(0),3(0),3(0)]
--- PDF            : Validation result for PDF "DeltaRJet1Jet2 signal training": 
--- PDF            :     chi2/ndof(!=0) = 121.7/9 = 13.52 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [7(2),4(0),3(0),2(0)]
--- PDF            : Validation result for PDF "WTransverseMass signal training": 
--- PDF            :     chi2/ndof(!=0) = 650.4/5 = 130.09 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [4(1),3(0),3(0),1(0)]
--- PDF            : Validation result for PDF "HT background training": 
--- PDF            :     chi2/ndof(!=0) = 200.5/8 = 25.06 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [6(2),4(0),4(0),3(0)]
--- PDF            : Validation result for PDF "Jet1Pt background training": 
--- PDF            :     chi2/ndof(!=0) = 33.0/7 = 4.71 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [4(2),3(0),2(0),0(0)]
--- PDF            : Validation result for PDF "DeltaRJet1Jet2 background training": 
--- PDF            :     chi2/ndof(!=0) = 164.4/9 = 18.27 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [6(2),6(0),5(0),2(0)]
--- PDF            : Validation result for PDF "WTransverseMass background training": 
--- PDF            :     chi2/ndof(!=0) = 295.0/8 = 36.87 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [6(2),4(0),3(0),2(0)]
--- Likelihood     : Creating weight file in text format: weights/TMVAnalysis_Likelihood.weights.txt
--- Likelihood     : Creating standalone response class : weights/TMVAnalysis_Likelihood.class.C
--- Likelihood     : Write monitoring histograms to file: TMVAout.root:/Method_Likelihood/Likelihood
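The reference histograms validated above enter the projective likelihood ratio y_L = L_S / (L_S + L_B), where each likelihood is the product of the 1D PDF values of the event's variables. A minimal sketch with illustrative stand-in PDFs in place of the reference histograms:

```python
def likelihood_ratio(event, pdf_sig, pdf_bkg):
    """Projective likelihood ratio L_S / (L_S + L_B).

    event: list of variable values; pdf_sig/pdf_bkg: matching lists of 1D PDF
    callables (stand-ins for the per-variable reference histograms)."""
    l_sig = l_bkg = 1.0
    for x, ps, pb in zip(event, pdf_sig, pdf_bkg):
        l_sig *= ps(x)
        l_bkg *= pb(x)
    total = l_sig + l_bkg
    return l_sig / total if total > 0 else 0.5
```

The product over 1D PDFs is exact only for uncorrelated variables, which is why the PCA-transformed variant (LikelihoodPCA, trained next) is also booked.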
--- Factory        : Train method: LikelihoodPCA
--- Likelihood     : Filling reference histograms
--- PDF            : Validation result for PDF "HT signal training": 
--- PDF            :     chi2/ndof(!=0) = 60.4/7 = 8.62 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [4(2),3(0),3(0),0(0)]
--- PDF            : Validation result for PDF "Jet1Pt signal training": 
--- PDF            :     chi2/ndof(!=0) = 454.2/8 = 56.77 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [5(2),4(0),3(0),3(0)]
--- PDF            : Validation result for PDF "DeltaRJet1Jet2 signal training": 
--- PDF            :     chi2/ndof(!=0) = 800.4/5 = 160.09 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [5(1),5(0),4(0),4(0)]
--- PDF            : Validation result for PDF "WTransverseMass signal training": 
--- PDF            :     chi2/ndof(!=0) = 95.2/10 = 9.52 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [7(3),7(0),5(0),0(0)]
--- PDF            : Validation result for PDF "HT background training": 
--- PDF            :     chi2/ndof(!=0) = 10274.0/3 = 3424.65 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [2(0),2(0),2(0),2(0)]
--- PDF            : Validation result for PDF "Jet1Pt background training": 
--- PDF            :     chi2/ndof(!=0) = 227.4/8 = 28.43 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [5(2),4(0),4(0),3(0)]
--- PDF            : Validation result for PDF "DeltaRJet1Jet2 background training": 
--- PDF            :     chi2/ndof(!=0) = 297.4/8 = 37.17 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [5(2),4(0),4(0),4(0)]
--- PDF            : Validation result for PDF "WTransverseMass background training": 
--- PDF            :     chi2/ndof(!=0) = 97.3/8 = 12.16 (Prob = 0.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [7(2),5(0),5(0),0(0)]
--- Likelihood     : Creating weight file in text format: weights/TMVAnalysis_LikelihoodPCA.weights.txt
--- Likelihood     : Creating standalone response class : weights/TMVAnalysis_LikelihoodPCA.class.C
--- Likelihood     : Write monitoring histograms to file: TMVAout.root:/Method_Likelihood/LikelihoodPCA
--- Factory        : Train method: PDERS
--- PDERS          : Creating weight file in text format: weights/TMVAnalysis_PDERS.weights.txt
--- PDERS          : Creating standalone response class : weights/TMVAnalysis_PDERS.class.C
--- PDERS          : No monitoring histograms written
--- Factory        : Train method: KNN
--- KNN            : <Train> start...
--- KNN            : Reading 1339 events
--- KNN            : Number of signal events 515.581
--- KNN            : Number of background events 824.079
--- KNN            : Creating kd-tree with 1339 events
--- ModulekNN      : Computing scale factor for 1d distributions: (ifrac, bottom, top) = (80%, 10%, 90%)
--- ModulekNN      : Optimizing tree for 4 variables with 1339 values
--- ModulekNN      : <Fill> Class 1 has      516 events
--- ModulekNN      : <Fill> Class 2 has      823 events
--- KNN            : Creating weight file in text format: weights/TMVAnalysis_KNN.weights.txt
--- KNN            : Starting WriteWeightsToStream(ostream& os) function...
--- KNN            : Creating standalone response class : weights/TMVAnalysis_KNN.class.C
--- KNN            : No monitoring histograms written
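After the kd-tree is built, the KNN classifier scores an event by the signal fraction among its k nearest training events. A minimal sketch of that decision, using a brute-force neighbour search in place of the kd-tree and ignoring event weights:

```python
def knn_signal_fraction(query, training, k=3):
    """Signal fraction among the k nearest neighbours of query.

    training: list of (point, is_signal) pairs; distance is Euclidean."""
    def dist2(point):
        return sum((a - b) ** 2 for a, b in zip(point, query))
    nearest = sorted(training, key=lambda t: dist2(t[0]))[:k]
    return sum(1 for _, is_sig in nearest if is_sig) / k
```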
--- Factory        : Train method: HMatrix
--- HMatrix        : Creating weight file in text format: weights/TMVAnalysis_HMatrix.weights.txt
--- HMatrix        : Creating standalone response class : weights/TMVAnalysis_HMatrix.class.C
--- HMatrix        : No monitoring histograms written
--- Factory        : Train method: Fisher
--- Fisher         : 
--- Fisher         : ================================================================
--- Fisher         : H e l p   f o r   c l a s s i f i e r   [ Fisher ] :
--- Fisher         : 
--- Fisher         : --- Short description:
--- Fisher         : 
--- Fisher         : Fisher discriminants select events by distinguishing the mean
--- Fisher         : values of the signal and background distributions in a
--- Fisher         : transformed variable space where linear correlations are removed.
--- Fisher         : 
--- Fisher         :    (More precisely: the "linear discriminator" determines
--- Fisher         :     an axis in the (correlated) hyperspace of the input 
--- Fisher         :     variables such that, when projecting the output classes 
--- Fisher         :     (signal and background) upon this axis, they are pushed 
--- Fisher         :     as far away from each other as possible, while events
--- Fisher         :     of the same class are confined to a close vicinity. The
--- Fisher         :     linearity property of this classifier is reflected in the 
--- Fisher         :     metric with which "far apart" and "close vicinity" are 
--- Fisher         :     determined: the covariance matrix of the discriminating
--- Fisher         :     variable space.)
--- Fisher         : 
--- Fisher         : --- Performance optimisation:
--- Fisher         : 
--- Fisher         : Optimal performance for Fisher discriminants is obtained for 
--- Fisher         : linearly correlated Gaussian-distributed variables. Any deviation
--- Fisher         : from this ideal reduces the achievable separation power. In 
--- Fisher         : particular, no discrimination at all is achieved for a variable
--- Fisher         : that has the same sample mean for signal and background, even if 
--- Fisher         : the shapes of the distributions are very different. Thus, Fisher 
--- Fisher         : discriminants often benefit from suitable transformations of the 
--- Fisher         : input variables. For example, if a variable x in [-1,1] has a
--- Fisher         : parabolic signal distribution and a uniform background
--- Fisher         : distribution, the mean value is zero in both cases, leading
--- Fisher         : to no separation. The simple transformation x -> |x| renders this
--- Fisher         : variable powerful for use in a Fisher discriminant.
--- Fisher         : 
--- Fisher         : --- Performance tuning via configuration options:
--- Fisher         : 
--- Fisher         : None
--- Fisher         : 
--- Fisher         : <Suppress this message by specifying "!H" in the booking option>
--- Fisher         : ================================================================
--- Fisher         : 
--- Fisher         : Results for Fisher coefficients:
--- Fisher         : ---------------------------------
--- Fisher         :        Variable:     Coefficient:
--- Fisher         : ---------------------------------
--- Fisher         :              HT:          -0.002
--- Fisher         :          Jet1Pt:          +0.001
--- Fisher         :  DeltaRJet1Jet2:          +0.104
--- Fisher         : WTransverseMass:          +0.001
--- Fisher         :        (offset):          +0.303
--- Fisher         : ---------------------------------
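The construction described in the help text above (an axis that pushes the projected class means apart, with "far" measured by the covariance matrix) can be sketched with the textbook formula w ~ W^-1 (mu_S - mu_B), where W is the within-class covariance. This is a minimal illustrative sketch on toy data, not TMVA's implementation, and it does not reproduce the coefficients printed above.

```python
import numpy as np

def fisher_coefficients(sig, bkg):
    # Textbook Fisher axis: w ~ W^-1 (mu_S - mu_B), where W is the
    # within-class (signal + background) covariance matrix.
    mu_s, mu_b = sig.mean(axis=0), bkg.mean(axis=0)
    W = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)
    return np.linalg.solve(W, mu_s - mu_b)

rng = np.random.default_rng(0)
sig = rng.normal([1.0, 0.5], 0.5, size=(1000, 2))  # toy signal sample
bkg = rng.normal([0.0, 0.0], 0.5, size=(1000, 2))  # toy background sample
w = fisher_coefficients(sig, bkg)
# Projections of the two classes onto w are pushed apart
print((sig @ w).mean() - (bkg @ w).mean())
```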
--- Fisher         : <CreateMVAPdfs> Using 50 bins and smooth 1 times
--- PDF            : Validation result for PDF "Fisher_tr_S": 
--- PDF            :     chi2/ndof(!=0) = 3.5/25 = 0.14 (Prob = 1.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [0(7),0(1),0(0),0(0)]
--- PDF            : Validation result for PDF "Fisher_tr_B": 
--- PDF            :     chi2/ndof(!=0) = 7.0/32 = 0.22 (Prob = 1.00)
--- PDF            :     #bins-found(#expected-bins) deviating > [1,2,3,6] sigmas: [2(10),0(1),0(0),0(0)]
--- Fisher         : <CreateMVAPdfs> Separation from histogram (PDF): 0.266 (0.256)
--- Fisher         : Creating weight file in text format: weights/TMVAnalysis_Fisher.weights.txt
--- Fisher         : Creating standalone response class : weights/TMVAnalysis_Fisher.class.C
--- Fisher         : No monitoring histograms written
--- Factory        : Train method: FDA_MT
--- FDA            : 
--- FDA            : ================================================================
--- FDA            : H e l p   f o r   c l a s s i f i e r   [ FDA ] :
--- FDA            : 
--- FDA            : --- Short description:
--- FDA            : 
--- FDA            : The function discriminant analysis (FDA) is a classifier suitable 
--- FDA            : to solve linear or simple nonlinear discrimination problems.
--- FDA            : 
--- FDA            : The user provides the desired function with adjustable parameters
--- FDA            : via the configuration option string, and FDA fits the parameters to
--- FDA            : it, requiring the signal (background) function value to be as close
--- FDA            : as possible to 1 (0). Its advantage over the more involved and
--- FDA            : automatic nonlinear discriminators is the simplicity and transparency 
--- FDA            : of the discrimination expression. A shortcoming is that FDA will
--- FDA            : underperform for involved problems with complicated, phase space
--- FDA            : dependent nonlinear correlations.
--- FDA            : 
--- FDA            : Please consult the Users Guide for the format of the formula string
--- FDA            : and the allowed parameter ranges:
--- FDA            : http://tmva.sourceforge.net/docu/TMVAUsersGuide.pdf
--- FDA            : 
--- FDA            : --- Performance optimisation:
--- FDA            : 
--- FDA            : The FDA performance depends on the complexity and fidelity of the
--- FDA            : user-defined discriminator function. As a general rule, it should
--- FDA            : be able to reproduce the discrimination power of any linear
--- FDA            : discriminant analysis. To reach into the nonlinear domain, it is
--- FDA            : useful to inspect the correlation profiles of the input variables,
--- FDA            : and add quadratic and higher polynomial terms between variables as
--- FDA            : necessary. Comparison with more involved nonlinear classifiers can
--- FDA            : be used as a guide.
--- FDA            : 
--- FDA            : --- Performance tuning via configuration options:
--- FDA            : 
--- FDA            : Depending on the function used, the choice of "FitMethod" is
--- FDA            : crucial for getting valuable solutions with FDA. As a guideline it
--- FDA            : is recommended to start with "FitMethod=MINUIT". When more complex
--- FDA            : functions are used where MINUIT does not converge to reasonable
--- FDA            : results, the user should switch to non-gradient FitMethods such
--- FDA            : as GeneticAlgorithm (GA) or Monte Carlo (MC). It might prove to be
--- FDA            : useful to combine GA (or MC) with MINUIT by setting the option
--- FDA            : "Converger=MINUIT". GA (MC) will then set the starting parameters
--- FDA            : for MINUIT such that the strength of GA (MC) in finding global
--- FDA            : minima is combined with the efficiency of MINUIT in finding local
--- FDA            : minima.
--- FDA            : 
--- FDA            : <Suppress this message by specifying "!H" in the booking option>
--- FDA            : ================================================================
--- FDA            : 
--- FDA            : Results for parameter fit using "MINUIT" fitter:
--- FDA            : -----------------------
--- FDA            : Parameter:  Fit result:
--- FDA            : -----------------------
--- FDA            :    Par(0):    0.694601
--- FDA            :    Par(1):  -0.0011554
--- FDA            :    Par(2): 0.000436835
--- FDA            :    Par(3):   0.0637497
--- FDA            :    Par(4): 0.000475737
--- FDA            : -----------------------
--- FDA            : Discriminator expression: "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
--- FDA            : Value of estimator at minimum: 0.43485
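With the linear discriminator expression printed above, the FDA fit (signal function values pushed toward 1, background toward 0, squared-deviation estimator) reduces to an ordinary least-squares problem. The sketch below illustrates that reduction on toy data; it is not TMVA's MINUIT-based fit, and the numbers do not correspond to the Par(i) values above.

```python
import numpy as np

# FDA with a linear ansatz f(x) = p0 + p1*x0 + ... + p4*x3: fit the
# parameters so f ~ 1 on signal and f ~ 0 on background. For a
# squared-error estimator this is ordinary least squares. (Toy sketch.)
rng = np.random.default_rng(1)
sig = rng.normal(1.0, 0.5, size=(500, 4))          # toy signal, 4 variables
bkg = rng.normal(0.0, 0.5, size=(500, 4))          # toy background
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(500), np.zeros(500)])  # targets: signal=1, bkg=0
A = np.hstack([np.ones((X.shape[0], 1)), X])       # column of ones = Par(0)
pars, *_ = np.linalg.lstsq(A, y, rcond=None)       # Par(0)..Par(4)
estimator = np.mean((A @ pars - y) ** 2)           # estimator at the minimum
print(pars.shape, estimator)
```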
--- FDA            : Creating weight file in text format: weights/TMVAnalysis_FDA_MT.weights.txt
--- FDA            : Creating standalone response class : weights/TMVAnalysis_FDA_MT.class.C
--- FDA            : No monitoring histograms written
--- Factory        : Train method: MLP
--- MLP            : 
--- MLP            : ================================================================
--- MLP            : H e l p   f o r   c l a s s i f i e r   [ MLP ] :
--- MLP            : 
--- MLP            : --- Short description:
--- MLP            : 
--- MLP            : The MLP artificial neural network (ANN) is a traditional feed-
--- MLP            : forward multilayer perceptron implementation. The MLP has a user-
--- MLP            : defined hidden layer architecture, while the number of input (output)
--- MLP            : nodes is determined by the input variables (output classes, i.e., 
--- MLP            : signal and one background). 
--- MLP            : 
--- MLP            : --- Performance optimisation:
--- MLP            : 
--- MLP            : Neural networks are stable and perform well for a large variety
--- MLP            : of linear and non-linear classification problems. However, in contrast
--- MLP            : to (e.g.) boosted decision trees, the user is advised to remove
--- MLP            : input variables that have little discrimination power.
--- MLP            : 
--- MLP            : In the tests we have carried out so far, the MLP and ROOT networks
--- MLP            : (TMlpANN, interfaced via TMVA) performed equally well, with however
--- MLP            : a clear speed advantage for the MLP. The Clermont-Ferrand neural 
--- MLP            : net (CFMlpANN) exhibited worse classification performance in these
--- MLP            : tests, which is partly due to the slow convergence of its training
--- MLP            : (at least 10k training cycles are required to achieve approximately
--- MLP            : competitive results).
--- MLP            : 
--- MLP            : Overtraining: only the TMlpANN performs an explicit separation of the
--- MLP            : full training sample into independent training and validation samples.
--- MLP            : We have found that in most high-energy physics applications the 
--- MLP            : available degrees of freedom (training events) are sufficient to 
--- MLP            : constrain the weights of the relatively simple architectures required
--- MLP            : to achieve good performance. Hence no overtraining should occur, and 
--- MLP            : the use of validation samples would only reduce the available training
--- MLP            : information. However, if the performance on the training sample is
--- MLP            : found to be significantly better than that obtained with the
--- MLP            : independent test sample, caution is needed. The results for these samples
--- MLP            : are printed to standard output at the end of each training job.
--- MLP            : 
--- MLP            : --- Performance tuning via configuration options:
--- MLP            : 
--- MLP            : The hidden layer architecture for all ANNs is defined by the option
--- MLP            : "HiddenLayers=N+1,N,...", where the first hidden layer has N+1
--- MLP            : neurons, the second N neurons (and so on), and where N is the number  
--- MLP            : of input variables. Excessive numbers of hidden layers should be avoided,
--- MLP            : in favour of more neurons in the first hidden layer.
--- MLP            : 
--- MLP            : The number of cycles should be above 500. As said, if the number of
--- MLP            : adjustable weights is small compared to the training sample size,
--- MLP            : using a large number of training samples should not lead to overtraining.
--- MLP            : 
--- MLP            : <Suppress this message by specifying "!H" in the booking option>
--- MLP            : ================================================================
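The overtraining guidance above hinges on the number of adjustable weights relative to the training sample size. For the "HiddenLayers=N+1,N" architecture quoted there, the count can be sketched as follows (an illustrative helper, not a TMVA function; it assumes each non-input layer gets one bias node, so a layer contributes (n_in + 1) * n_out weights):

```python
# Count adjustable MLP weights for a given hidden-layer architecture.
# Assumes one bias node feeding every non-input layer; illustrative only.
def mlp_weight_count(n_inputs, hidden_layers, n_outputs=1):
    layers = [n_inputs] + list(hidden_layers) + [n_outputs]
    return sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))

N = 4  # four input variables, as in this job
print(mlp_weight_count(N, [N + 1, N]))  # weights the training must constrain
```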
--- MLP            : 
--- MLP            : Training Network
--- MLP            : Train: elapsed time: 5.19 sec                      
--- MLP            : Creating weight file in text format: weights/TMVAnalysis_MLP.weights.txt
--- MLP            : Creating standalone response class : weights/TMVAnalysis_MLP.class.C
--- MLP            : Write special histos to file: TMVAout.root:/Method_MLP/MLP
--- Factory        : Train method: SVM_Gauss
--- SVM            : Sorry, no computing time forecast available for SVM, please wait ...
--- SVM            : <Train> elapsed time: 0.0389 sec                                          
--- SVM            : <Train> number of iterations: 5
--- SVM            : Results:
--- SVM            : - number of support vectors: 9 (0%)
--- SVM            : - b: 0.356742
--- SVM            : All support vectors stored properly
--- SVM            : Creating weight file in text format: weights/TMVAnalysis_SVM_Gauss.weights.txt
--- SVM            : Creating standalone response class : weights/TMVAnalysis_SVM_Gauss.class.C
--- SVM            : No monitoring histograms written
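The trained Gaussian-kernel SVM is fully described by its support vectors and the offset b reported above: the response is a kernel-weighted sum over the support vectors. A toy sketch of that functional form follows; all values below are made up for illustration, not the ones fitted in this job.

```python
import numpy as np

# Gaussian-kernel SVM response: f(x) = sum_i alpha_i * y_i * K(x, sv_i) - b,
# with K(x, z) = exp(-gamma * ||x - z||^2). Toy values, illustrative only.
def svm_response(x, sv, alpha, y, gamma, b):
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))  # kernel vs. each SV
    return float(alpha @ (y * k) - b)

sv = np.array([[0.0, 0.0], [1.0, 1.0]])  # two toy support vectors
alpha = np.array([1.0, 1.0])             # Lagrange multipliers
y = np.array([-1.0, 1.0])                # class labels of the SVs
r = svm_response(np.array([1.0, 1.0]), sv, alpha, y, gamma=0.5, b=0.0)
print(r)  # positive: the point sits on the positive support vector
```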
--- Factory        : Train method: BDT
--- BDT            : Training 400 Decision Trees ... patience please
--- BDT            : <Train> elapsed time: 29.9 sec                              
--- BDT            : <Train> average number of nodes before/after pruning : 100 / 32
--- BDT            : Creating weight file in text format: weights/TMVAnalysis_BDT.weights.txt
--- BDT            : Creating standalone response class : weights/TMVAnalysis_BDT.class.C
--- BDT            : Write monitoring histograms to file: TMVAout.root:/Method_BDT/BDT
--- Factory        : Train method: BDTD
--- BDT            : Training 400 Decision Trees ... patience please
--- BDT            : <Train> elapsed time: 30.5 sec                              
--- BDT            : <Train> average number of nodes before/after pruning : 100 / 37
--- BDT            : Creating weight file in text format: weights/TMVAnalysis_BDTD.weights.txt
--- BDT            : Creating standalone response class : weights/TMVAnalysis_BDTD.class.C
--- BDT            : Write monitoring histograms to file: TMVAout.root:/Method_BDT/BDTD
--- Factory        : Train method: RuleFit
--- RuleFit        : 
--- RuleFit        : ================================================================
--- RuleFit        : H e l p   f o r   c l a s s i f i e r   [ RuleFit ] :
--- RuleFit        : 
--- RuleFit        : --- Short description:
--- RuleFit        : 
--- RuleFit        : This method uses a collection of so called rules to create a
--- RuleFit        : discriminating scoring function. Each rule consists of a series
--- RuleFit        : of cuts in parameter space. The ensemble of rules is created
--- RuleFit        : from a forest of decision trees, trained using the training data.
--- RuleFit        : Each node (apart from the root) corresponds to one rule.
--- RuleFit        : The scoring function is then obtained by linearly combining
--- RuleFit        : the rules. A fitting procedure is applied to find the optimum
--- RuleFit        : set of coefficients. The goal is to find a model with few rules
--- RuleFit        : but with a strong discriminating power.
--- RuleFit        : 
--- RuleFit        : --- Performance optimisation:
--- RuleFit        : 
--- RuleFit        : There are two important considerations to make when optimising:
--- RuleFit        : 
--- RuleFit        :   1. Topology of the decision tree forest
--- RuleFit        :   2. Fitting of the coefficients
--- RuleFit        : 
--- RuleFit        : The maximum complexity of the rules is defined by the size of
--- RuleFit        : the trees. Large trees will yield many complex rules and capture
--- RuleFit        : higher order correlations. On the other hand, small trees will
--- RuleFit        : lead to a smaller ensemble with simple rules, only capable of
--- RuleFit        : modeling simple structures.
--- RuleFit        : Several parameters exist for controlling the complexity of the
--- RuleFit        : rule ensemble.
--- RuleFit        : 
--- RuleFit        : The fitting procedure searches for a minimum using a gradient
--- RuleFit        : directed path. Apart from step size and number of steps, the
--- RuleFit        : evolution of the path is defined by a cut-off parameter, tau.
--- RuleFit        : This parameter is unknown and depends on the training data.
--- RuleFit        : A large value will tend to give large weights to a few rules.
--- RuleFit        : Similarly, a small value will lead to a large set of rules
--- RuleFit        : with similar weights.
--- RuleFit        : 
--- RuleFit        : A final point is the choice of model: rules and/or linear terms.
--- RuleFit        : For a given training sample, the result may improve by adding
--- RuleFit        : terms. If best performance is obtained using only linear
--- RuleFit        : terms, it is very likely that the Fisher discriminant would be
--- RuleFit        : a better choice. Ideally the fitting procedure should be able to
--- RuleFit        : make this choice by giving appropriate weights to either type of term.
--- RuleFit        : 
--- RuleFit        : --- Performance tuning via configuration options:
--- RuleFit        : 
--- RuleFit        : I.  TUNING OF RULE ENSEMBLE:
--- RuleFit        : 
--- RuleFit        :    ForestType  : It is recommended to use the default "AdaBoost".
--- RuleFit        :    nTrees      : More trees lead to more rules but also slower
--- RuleFit        :                  performance. With too few trees the risk is
--- RuleFit        :                  that the rule ensemble becomes too simple.
--- RuleFit        :    fEventsMin  
--- RuleFit        :    fEventsMax  : With a lower min, more and larger trees will be
--- RuleFit        :                  generated, leading to more complex rules.
--- RuleFit        :                  With a higher max, more and smaller trees will be
--- RuleFit        :                  generated, leading to simpler rules.
--- RuleFit        :                  By changing this range, the average complexity
--- RuleFit        :                  of the rule ensemble can be controlled.
--- RuleFit        :    RuleMinDist : By increasing the minimum distance between
--- RuleFit        :                  rules, fewer and more diverse rules will remain.
--- RuleFit        :                  Initially it is a good idea to keep this small
--- RuleFit        :                  or zero and let the fitting do the selection of
--- RuleFit        :                  rules. In order to reduce the ensemble size,
--- RuleFit        :                  the value can then be increased.
--- RuleFit        : 
--- RuleFit        : II. TUNING OF THE FITTING:
--- RuleFit        : 
--- RuleFit        :    GDPathEveFrac : fraction of events in path evaluation
--- RuleFit        :                  Increasing this fraction will improve the path
--- RuleFit        :                  finding. However, too high a value will leave few
--- RuleFit        :                  unique events available for error estimation.
--- RuleFit        :                  It is recommended to use the default = 0.5.
--- RuleFit        :    GDTau         : cutoff parameter tau
--- RuleFit        :                  By default this value is set to -1.0.
--- RuleFit        :                  This means that the cut off parameter is
--- RuleFit        :                  automatically estimated. In most cases
--- RuleFit        :                  this should be fine. However, you may want
--- RuleFit        :                  to fix this value if you already know it
--- RuleFit        :                  and want to reduce the training time.
--- RuleFit        :    GDTauPrec     : precision of estimated tau
--- RuleFit        :                  Increase this precision to find a more
--- RuleFit        :                  accurate cut-off parameter.
--- RuleFit        :    GDNStep       : number of steps in path search
--- RuleFit        :                  If the number of steps is too small, then
--- RuleFit        :                  the program will give a warning message.
--- RuleFit        : 
--- RuleFit        : III. WARNING MESSAGES
--- RuleFit        : 
--- RuleFit        : Risk(i+1)>=Risk(i) in path
--- RuleFit        : Chaotic behaviour of risk evolution.
--- RuleFit        :                  By construction the Risk should always decrease.
--- RuleFit        :                  However, if the training sample is too small or
--- RuleFit        :                  the model is overtrained, such warnings can
--- RuleFit        :                  occur.
--- RuleFit        :                  The warnings can safely be ignored if only a
--- RuleFit        :                  few (<3) occur. If more warnings are generated,
--- RuleFit        :                  the fitting fails.
--- RuleFit        :                  A remedy may be to increase the value
--- RuleFit        :                  GDValidEveFrac to 1.0 (or a larger value).
--- RuleFit        :                  In addition, if GDPathEveFrac is too high
--- RuleFit        :                  the same warnings may occur since the events
--- RuleFit        :                  used for error estimation are also used for
--- RuleFit        :                  path estimation.
--- RuleFit        :                  Another possibility is to modify the model - 
--- RuleFit        :                  See above on tuning the rule ensemble.
--- RuleFit        : 
--- RuleFit        : The error rate was still decreasing at the end of the path
--- RuleFit        :                  Too few steps in path! Increase GDNSteps.
--- RuleFit        : 
--- RuleFit        : Reached minimum early in the search
--- RuleFit        :                  Minimum was found early in the fitting. This
--- RuleFit        :                  may indicate that the step size GDStep
--- RuleFit        :                  was too large. Reduce it and rerun.
--- RuleFit        :                  If the results are still not OK, modify the
--- RuleFit        :                  model, either by changing the rule ensemble
--- RuleFit        :                  or by adding/removing linear terms.
--- RuleFit        : 
--- RuleFit        : <Suppress this message by specifying "!H" in the booking option>
--- RuleFit        : ================================================================
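The scoring function described in the help text (an offset plus linearly combined rules, each rule an AND of cuts, plus optional linear terms) can be sketched directly. The rules, weights, and event below are illustrative placeholders, not this job's fitted model.

```python
# RuleFit-style score: offset + sum of rule weights for rules whose cuts all
# pass + sum of linear-term weights times variable values. Toy model only.
def rule_fires(event, cuts):
    # cuts: list of (variable, low, high); the rule is an AND of all cuts
    return all(lo < event[var] <= hi for var, lo, hi in cuts)

def score(event, offset, rules, linear):
    s = offset
    s += sum(w for w, cuts in rules if rule_fires(event, cuts))
    s += sum(w * event[var] for var, w in linear)
    return s

toy_rules = [(0.5, [("HT", 304.0, float("inf"))])]  # one toy rule: HT > 304
toy_linear = [("HT", -0.001)]                       # one toy linear term in HT
print(score({"HT": 350.0}, 0.0, toy_rules, toy_linear))  # rule fires here
```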
--- RuleFit        : 
--- RuleFit        : -------------------RULE ENSEMBLE SUMMARY------------------------
--- RuleFit        : Tree training method               : AdaBoost
--- RuleFit        : Number of events per tree          : 1339
--- RuleFit        : Number of trees                    : 20
--- RuleFit        : Number of generated rules          : 242
--- RuleFit        : Idem, after cleanup                : 177
--- RuleFit        : Average number of cuts per rule    :     3.59
--- RuleFit        : Spread in number of cuts per rules :     1.84
--- RuleFit        : ----------------------------------------------------------------
--- RuleFit        : 
--- RuleFit        : GD path scan - the scan stops when the max num. of steps is reached or a min is found
--- RuleFit        : Estimating the cutoff parameter tau. The estimated time is a pessimistic maximum.
--- RuleFit        : Best path found with tau = 0.9293 after 3.16 sec      
--- RuleFit        : Fitting model...
--- RuleFit        : Minimization elapsed time : 1.82 sec                        
--- RuleFit        : ----------------------------------------------------------------
--- RuleFit        : Found minimum at step 10000 with error = 1.49845
--- RuleFit        : Reason for ending loop: end of loop reached
--- RuleFit        : ----------------------------------------------------------------
--- <WARNING> RuleFit        : The error rate was still decreasing at the end of the path
--- <WARNING> RuleFit        : Increase number of steps (GDNSteps).
--- RuleFit        : Elapsed time: 5.46 sec
--- RuleFit        : Removed 175 out of a total of 177 rules with importance < 0.001
--- RuleFit        : 
--- RuleFit        : ================================================================
--- RuleFit        :                           M o d e l                             
--- RuleFit        : ================================================================
--- RuleFit        : Offset (a0) = 0.00921949
--- RuleFit        : ------------------------------------
--- RuleFit        : Linear model (weights unnormalised)
--- RuleFit        : ------------------------------------
--- RuleFit        :        Variable :     Weights : Importance
--- RuleFit        : ------------------------------------
--- RuleFit        :              HT :  -6.727e-03 :  1.000
--- RuleFit        :          Jet1Pt-> importance below threshold =  0.000
--- RuleFit        :  DeltaRJet1Jet2 :  -1.266e-02 :  0.008
--- RuleFit        : WTransverseMass-> importance below threshold =  0.000
--- RuleFit        : ------------------------------------
--- RuleFit        : Number of rules = 2
--- RuleFit        : Printing the first 2 rules, ordered by importance.
--- RuleFit        : Rule    1 : Importance  = 0.0028
--- RuleFit        :             Cut  1 :        304 < HT             
--- RuleFit        : Rule    2 : Importance  = 0.0028
--- RuleFit        :             Cut  1 :        310 < HT             
--- RuleFit        : All rules printed
--- RuleFit        : ================================================================
--- RuleFit        : 
--- RuleFit        : Creating weight file in text format: weights/TMVAnalysis_RuleFit.weights.txt
--- RuleFit        : Creating standalone response class : weights/TMVAnalysis_RuleFit.class.C
--- RuleFit        : Write monitoring ntuple to file: TMVAout.root:/Method_RuleFit/RuleFit
--- Factory        : 
--- Factory        : Begin ranking of input variables...
--- Factory        : No variable ranking supplied by classifier: CutsGA
--- Likelihood     : Ranking result (top variable is best ranked)
--- Likelihood     : ----------------------------------------------------------------
--- Likelihood     : Rank : Variable        : Delta Separation
--- Likelihood     : ----------------------------------------------------------------
--- Likelihood     :    1 : HT              : 1.573e-01
--- Likelihood     :    2 : Jet1Pt          : 2.590e-03
--- Likelihood     :    3 : DeltaRJet1Jet2  : -1.601e-02
--- Likelihood     :    4 : WTransverseMass : -5.441e-02
--- Likelihood     : ----------------------------------------------------------------
--- Likelihood     : Ranking result (top variable is best ranked)
--- Likelihood     : ----------------------------------------------------------------
--- Likelihood     : Rank : Variable        : Delta Separation
--- Likelihood     : ----------------------------------------------------------------
--- Likelihood     :    1 : HT              : 2.026e-01
--- Likelihood     :    2 : Jet1Pt          : -1.315e-02
--- Likelihood     :    3 : DeltaRJet1Jet2  : -2.915e-02
--- Likelihood     :    4 : WTransverseMass : -3.787e-02
--- Likelihood     : ----------------------------------------------------------------
--- Factory        : No variable ranking supplied by classifier: PDERS
--- Factory        : No variable ranking supplied by classifier: KNN
--- Factory        : No variable ranking supplied by classifier: HMatrix
--- Fisher         : Ranking result (top variable is best ranked)
--- Fisher         : ----------------------------------------------------------------
--- Fisher         : Rank : Variable        : Discr. power
--- Fisher         : ----------------------------------------------------------------
--- Fisher         :    1 : HT              : 8.467e-02
--- Fisher         :    2 : Jet1Pt          : 7.024e-02
--- Fisher         :    3 : DeltaRJet1Jet2  : 3.259e-03
--- Fisher         :    4 : WTransverseMass : 6.449e-04
--- Fisher         : ----------------------------------------------------------------
--- Factory        : No variable ranking supplied by classifier: FDA_MT
--- MLP            : Ranking result (top variable is best ranked)
--- MLP            : ----------------------------------------------------------------
--- MLP            : Rank : Variable        : Importance
--- MLP            : ----------------------------------------------------------------
--- MLP            :    1 : HT              : 1.705e-01
--- MLP            :    2 : DeltaRJet1Jet2  : 5.423e-02
--- MLP            :    3 : Jet1Pt          : 3.493e-02
--- MLP            :    4 : WTransverseMass : 5.648e-05
--- MLP            : ----------------------------------------------------------------
--- Factory        : No variable ranking supplied by classifier: SVM_Gauss
--- BDT            : Ranking result (top variable is best ranked)
--- BDT            : ----------------------------------------------------------------
--- BDT            : Rank : Variable        : Variable Importance
--- BDT            : ----------------------------------------------------------------
--- BDT            :    1 : DeltaRJet1Jet2  : 2.834e-01
--- BDT            :    2 : HT              : 2.664e-01
--- BDT            :    3 : WTransverseMass : 2.550e-01
--- BDT            :    4 : Jet1Pt          : 1.952e-01
--- BDT            : ----------------------------------------------------------------
--- BDT            : Ranking result (top variable is best ranked)
--- BDT            : ----------------------------------------------------------------
--- BDT            : Rank : Variable        : Variable Importance
--- BDT            : ----------------------------------------------------------------
--- BDT            :    1 : HT              : 3.294e-01
--- BDT            :    2 : DeltaRJet1Jet2  : 2.423e-01
--- BDT            :    3 : WTransverseMass : 2.242e-01
--- BDT            :    4 : Jet1Pt          : 2.040e-01
--- BDT            : ----------------------------------------------------------------
--- RuleFit        : Ranking result (top variable is best ranked)
--- RuleFit        : ----------------------------------------------------------------
--- RuleFit        : Rank : Variable        : Importance
--- RuleFit        : ----------------------------------------------------------------
--- RuleFit        :    1 : HT              : 1.000e+00
--- RuleFit        :    2 : DeltaRJet1Jet2  : 8.313e-03
--- RuleFit        :    3 : Jet1Pt          : 0.000e+00
--- RuleFit        :    4 : WTransverseMass : 0.000e+00
--- RuleFit        : ----------------------------------------------------------------
--- Factory        : 
--- Factory        : Testing all classifiers...
--- Factory        : Test method: CutsGA
--- Cuts           : Reading weight file: weights/TMVAnalysis_CutsGA.weights.txt
--- Cuts           : Read method with name <Cuts> and title <CutsGA>
--- Cuts           : Classifier was trained with TMVA Version: 3.9.5
--- Cuts           : Classifier was trained with ROOT Version: 5.20/00
--- Cuts           : Create VariableTransformation "None"
--- Cuts           : Use optimization method: 'Genetic Algorithm'
--- Cuts           : Use efficiency computation method: 'Event Selection'
--- Cuts           : Use "FSmart" cuts for variable: 'HT'
--- Cuts           : Use "FSmart" cuts for variable: 'Jet1Pt'
--- Cuts           : Use "FSmart" cuts for variable: 'DeltaRJet1Jet2'
--- Cuts           : Use "FSmart" cuts for variable: 'WTransverseMass'
--- Cuts           : Option for variable: HT: 'ForceSmart' (#: 3)
--- Cuts           : Option for variable: Jet1Pt: 'ForceSmart' (#: 3)
--- Cuts           : Option for variable: DeltaRJet1Jet2: 'ForceSmart' (#: 3)
--- Cuts           : Option for variable: WTransverseMass: 'ForceSmart' (#: 3)
--- Cuts           : Read cuts optimised using Genetic Algorithm
--- Cuts           : in 100 signal efficiency bins and for 4 variables
--- Cuts           : Preparing evaluation tree... 
--- Cuts           : Elapsed time for evaluation of 4 events: 0.378 sec       
--- Factory        : Test method: Likelihood
--- Likelihood     : Reading weight file: weights/TMVAnalysis_Likelihood.weights.txt
--- Likelihood     : Read method with name <Likelihood> and title <Likelihood>
--- Likelihood     : Classifier was trained with TMVA Version: 3.9.5
--- Likelihood     : Classifier was trained with ROOT Version: 5.20/00
--- Likelihood     : Create VariableTransformation "None"
--- Likelihood     : Preparing evaluation tree... 
--- Likelihood     : Elapsed time for evaluation of 4 events: 0.452 sec       
--- Factory        : Test method: LikelihoodPCA
--- Likelihood     : Reading weight file: weights/TMVAnalysis_LikelihoodPCA.weights.txt
--- Likelihood     : Read method with name <Likelihood> and title <LikelihoodPCA>
--- Likelihood     : Classifier was trained with TMVA Version: 3.9.5
--- Likelihood     : Classifier was trained with ROOT Version: 5.20/00
--- Likelihood     : Create VariableTransformation "PCA"
--- Likelihood     : Use principal component transformation
--- Likelihood     : Preparing evaluation tree... 
--- Likelihood     : Elapsed time for evaluation of 4 events: 0.417 sec       
--- Factory        : Test method: PDERS
--- PDERS          : Reading weight file: weights/TMVAnalysis_PDERS.weights.txt
--- PDERS          : Read method with name <PDERS> and title <PDERS>
--- PDERS          : Classifier was trained with TMVA Version: 3.9.5
--- PDERS          : Classifier was trained with ROOT Version: 5.20/00
--- PDERS          : Create VariableTransformation "None"
--- PDERS          : Preparing evaluation tree... 
--- PDERS          : Elapsed time for evaluation of 4 events: 0.356 sec       
--- Factory        : Test method: KNN
--- KNN            : Reading weight file: weights/TMVAnalysis_KNN.weights.txt
--- KNN            : Read method with name <KNN> and title <KNN>
--- KNN            : Classifier was trained with TMVA Version: 3.9.5
--- KNN            : Classifier was trained with ROOT Version: 5.20/00
--- KNN            : Create VariableTransformation "None"
--- KNN            : Starting ReadWeightsFromStream(istream& is) function...
--- KNN            : Erasing 1339 previously stored events
--- KNN            : Read 1339 events from text file
--- KNN            : Creating kd-tree with 1339 events
--- ModulekNN      : Computing scale factor for 1d distributions: (ifrac, bottom, top) = (80%, 10%, 90%)
--- ModulekNN      : Optimizing tree for 4 variables with 1339 values
--- ModulekNN      : <Fill> Class 1 has      516 events
--- ModulekNN      : <Fill> Class 2 has      823 events
--- KNN            : Preparing evaluation tree... 
--- KNN            : Elapsed time for evaluation of 4 events: 0.499 sec       
--- Factory        : Test method: HMatrix
--- HMatrix        : Reading weight file: weights/TMVAnalysis_HMatrix.weights.txt
--- HMatrix        : Read method with name <HMatrix> and title <HMatrix>
--- HMatrix        : Classifier was trained with TMVA Version: 3.9.5
--- HMatrix        : Classifier was trained with ROOT Version: 5.20/00
--- HMatrix        : Create VariableTransformation "None"
--- HMatrix        : Preparing evaluation tree... 
--- HMatrix        : Elapsed time for evaluation of 4 events: 0.387 sec       
--- Factory        : Test method: Fisher
--- Fisher         : Reading weight file: weights/TMVAnalysis_Fisher.weights.txt
--- Fisher         : Read method with name <Fisher> and title <Fisher>
--- Fisher         : Classifier was trained with TMVA Version: 3.9.5
--- Fisher         : Classifier was trained with ROOT Version: 5.20/00
--- Fisher         : Create VariableTransformation "None"
--- Fisher         : Preparing evaluation tree... 
--- Fisher         : Elapsed time for evaluation of 4 events: 0.397 sec       
--- Factory        : Test method: FDA_MT
--- FDA            : Reading weight file: weights/TMVAnalysis_FDA_MT.weights.txt
--- FDA            : Read method with name <FDA> and title <FDA_MT>
--- FDA            : Classifier was trained with TMVA Version: 3.9.5
--- FDA            : Classifier was trained with ROOT Version: 5.20/00
--- FDA            : Create VariableTransformation "None"
--- FDA            : User-defined formula string       : "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
--- FDA            : TFormula-compatible formula string: "[0]+[1]*[5]+[2]*[6]+[3]*[7]+[4]*[8]"
--- FDA            : Creating and compiling formula
--- FDA_Fitter_M...: Parsing option string: 
--- FDA_Fitter_M...: "!V=False:!H=True:!Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:!ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):!FitMethod=MINUIT:!D=False:!Normalise=False:!VarTransform=None:!VarTransformType=Signal:!NbinsMVAPdf=60:!NsmoothMVAPdf=2:!VerboseLevel=Default:!CreateMVAPdfs=False:!TxtWeightFilesOnly=True:!Converger=None"
--- FDA_Fitter_M...: The following options are set:
--- FDA_Fitter_M...: - By User:
--- FDA_Fitter_M...:     <none>
--- FDA_Fitter_M...: - Default:
--- FDA_Fitter_M...:     ErrorLevel: "1" [TMinuit: error level: 0.5=logL fit, 1=chi-squared fit]
--- FDA_Fitter_M...:     PrintLevel: "-1" [TMinuit: output level: -1=least, 0, +1=all garbage]
--- FDA_Fitter_M...:     FitStrategy: "2" [TMinuit: fit strategy: 2=best]
--- FDA_Fitter_M...:     PrintWarnings: "False" [TMinuit: suppress warnings]
--- FDA_Fitter_M...:     UseImprove: "True" [TMinuit: use IMPROVE]
--- FDA_Fitter_M...:     UseMinos: "True" [TMinuit: use MINOS]
--- FDA_Fitter_M...:     SetBatch: "False" [TMinuit: use batch mode]
--- FDA_Fitter_M...:     MaxCalls: "1000" [TMinuit: approximate maximum number of function calls]
--- FDA_Fitter_M...:     Tolerance: "0.1" [TMinuit: tolerance to the function value at the minimum]
--- FDA_Fitter_M...: <MinuitFitter> Init 
--- FDA            : Preparing evaluation tree... 
--- FDA            : Elapsed time for evaluation of 4 events: 0.386 sec       
--- Factory        : Test method: MLP
--- MLP            : Reading weight file: weights/TMVAnalysis_MLP.weights.txt
--- MLP            : Read method with name <MLP> and title <MLP>
--- MLP            : Classifier was trained with TMVA Version: 3.9.5
--- MLP            : Classifier was trained with ROOT Version: 5.20/00
--- MLP            : Create VariableTransformation "None"
--- MLP            : Building Network
--- MLP            : Initializing weights
--- MLP            : Forcing weights
--- MLP            : Preparing evaluation tree... 
--- MLP            : Elapsed time for evaluation of 4 events: 0.397 sec       
--- Factory        : Test method: SVM_Gauss
--- SVM            : Reading weight file: weights/TMVAnalysis_SVM_Gauss.weights.txt
--- SVM            : Read method with name <SVM> and title <SVM_Gauss>
--- SVM            : Classifier was trained with TMVA Version: 3.9.5
--- SVM            : Classifier was trained with ROOT Version: 5.20/00
--- SVM            : Create VariableTransformation "None"
--- SVM            : Preparing evaluation tree... 
--- SVM            : Elapsed time for evaluation of 4 events: 0.366 sec       
--- Factory        : Test method: BDT
--- BDT            : Reading weight file: weights/TMVAnalysis_BDT.weights.txt
--- BDT            : Read method with name <BDT> and title <BDT>
--- BDT            : Classifier was trained with TMVA Version: 3.9.5
--- BDT            : Classifier was trained with ROOT Version: 5.20/00
--- BDT            : Create VariableTransformation "None"
--- BDT            : Read 400 Decision trees
--- BDT            : Preparing evaluation tree... 
--- BDT            : Elapsed time for evaluation of 4 events: 0.445 sec       
--- Factory        : Test method: BDTD
--- BDT            : Reading weight file: weights/TMVAnalysis_BDTD.weights.txt
--- BDT            : Read method with name <BDT> and title <BDTD>
--- BDT            : Classifier was trained with TMVA Version: 3.9.5
--- BDT            : Classifier was trained with ROOT Version: 5.20/00
--- BDT            : Create VariableTransformation "Decorrelate"
--- BDT            : Read 400 Decision trees
--- BDT            : Preparing evaluation tree... 
--- BDT            : Elapsed time for evaluation of 4 events: 0.403 sec       
--- Factory        : Test method: RuleFit
--- RuleFit        : Reading weight file: weights/TMVAnalysis_RuleFit.weights.txt
--- RuleFit        : Read method with name <RuleFit> and title <RuleFit>
--- RuleFit        : Classifier was trained with TMVA Version: 3.9.5
--- RuleFit        : Classifier was trained with ROOT Version: 5.20/00
--- RuleFit        : Create VariableTransformation "None"
--- RuleFit        : Preparing evaluation tree... 
--- RuleFit        : Elapsed time for evaluation of 4 events: 0.416 sec       
--- Factory        : Evaluating all classifiers...
--- Factory        : Evaluate classifier: CutsGA
--- Factory        : Evaluate classifier: Likelihood
--- Likelihood     : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: LikelihoodPCA
--- Likelihood     : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: PDERS
--- PDERS          : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: KNN
--- KNN            : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: HMatrix
--- HMatrix        : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: Fisher
--- Fisher         : Loop over test events and fill histograms with classifier response ...
--- Fisher         : Also filling probability and rarity histograms (on request) ...
--- Factory        : Evaluate classifier: FDA_MT
--- FDA            : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: MLP
--- MLP            : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: SVM_Gauss
--- SVM            : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: BDT
--- BDT            : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: BDTD
--- BDT            : Loop over test events and fill histograms with classifier response ...
--- Factory        : Evaluate classifier: RuleFit
--- RuleFit        : Loop over test events and fill histograms with classifier response ...
--- Factory        : 
--- Factory        : Inter-MVA correlation matrix (signal):
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        :                Likelihood LikelihoodPCA   PDERS     KNN HMatrix  Fisher  FDA_MT     MLP SVM_Gauss     BDT    BDTD RuleFit
--- Factory        :    Likelihood:     +1.000        +1.000  -1.000  -1.000  -1.000  -1.000  -1.000  -1.000    -1.000  -1.000  -1.000  +1.000
--- Factory        : LikelihoodPCA:     +1.000        +1.000  -1.000  -1.000  -1.000  -1.000  -1.000  -1.000    -1.000  -1.000  -1.000  +1.000
--- Factory        :         PDERS:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :           KNN:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :       HMatrix:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :        Fisher:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :        FDA_MT:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :           MLP:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :     SVM_Gauss:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :           BDT:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :          BDTD:     -1.000        -1.000  +1.000  +1.000  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  -1.000
--- Factory        :       RuleFit:     +1.000        +1.000  -1.000  -1.000  -1.000  -1.000  -1.000  -1.000    -1.000  -1.000  -1.000  +1.000
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        : 
--- Factory        : Inter-MVA correlation matrix (background):
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        :                Likelihood LikelihoodPCA   PDERS     KNN HMatrix  Fisher  FDA_MT     MLP SVM_Gauss     BDT    BDTD RuleFit
--- Factory        :    Likelihood:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        : LikelihoodPCA:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :         PDERS:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :           KNN:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :       HMatrix:     -1.000        -1.000  -1.000  -1.000  +1.000  -1.000  -1.000  -1.000    +1.000  -1.000  -1.000  -1.000
--- Factory        :        Fisher:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :        FDA_MT:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :           MLP:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :     SVM_Gauss:     -1.000        -1.000  -1.000  -1.000  +1.000  -1.000  -1.000  -1.000    +1.000  -1.000  -1.000  -1.000
--- Factory        :           BDT:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :          BDTD:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        :       RuleFit:     +1.000        +1.000  +1.000  +1.000  -1.000  +1.000  +1.000  +1.000    -1.000  +1.000  +1.000  +1.000
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
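The ±1.000 entries in the two correlation matrices above are an artifact of the tiny test sample (NSigTest=2, NBkgTest=2): the Pearson correlation of any two series with only two points is exactly ±1, whatever the classifier outputs are. A minimal sketch (the `pearson` helper and the sample values are illustrative, not taken from this run):

```python
import math

def pearson(x, y):
    # Plain Pearson correlation coefficient of two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two classifier outputs evaluated on only 2 events correlate
# perfectly (up to sign), regardless of the actual values:
print(pearson([0.62, 0.31], [0.48, 0.95]))  # +/-1 up to rounding
```

This is why every off-diagonal element is saturated: with two events per class there is no freedom for any intermediate correlation, so the matrices carry no information about the classifiers themselves.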
--- Factory        : 
--- Factory        : The following "overlap" matrices contain the fraction of events for which 
--- Factory        : the MVAs 'i' and 'j' have returned conform answers about "signal-likeness"
--- Factory        : An event is signal-like, if its MVA output exceeds the following value:
--- Factory        : -----------------------------
--- Factory        :        Method:     Cut value:
--- Factory        : -----------------------------
--- Factory        :    Likelihood:        +0.620
--- Factory        : LikelihoodPCA:        +0.600
--- Factory        :         PDERS:        +0.599
--- Factory        :           KNN:        +0.484
--- Factory        :       HMatrix:        +0.016
--- Factory        :        Fisher:        -0.001
--- Factory        :        FDA_MT:        +0.504
--- Factory        :           MLP:        -0.426
--- Factory        :     SVM_Gauss:        +0.593
--- Factory        :           BDT:        -0.064
--- Factory        :          BDTD:        +0.004
--- Factory        :       RuleFit:        -2.297
--- Factory        : -----------------------------
--- Factory        : which correspond to the working point: eff(signal) = 1 - eff(background)
--- Factory        : Note: no correlations and overlap with cut method are provided at present
--- Factory        : 
--- Factory        : Inter-MVA overlap matrix (signal):
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        :                Likelihood LikelihoodPCA   PDERS     KNN HMatrix  Fisher  FDA_MT     MLP SVM_Gauss     BDT    BDTD RuleFit
--- Factory        :    Likelihood:     +1.000        +0.500  +1.000  +1.000  +0.500  +0.500  +0.500  +0.500    +0.500  +0.500  +0.500  +0.500
--- Factory        : LikelihoodPCA:     +0.500        +1.000  +0.500  +0.500  +0.000  +0.000  +0.000  +0.000    +0.000  +0.000  +0.000  +1.000
--- Factory        :         PDERS:     +1.000        +0.500  +1.000  +1.000  +0.500  +0.500  +0.500  +0.500    +0.500  +0.500  +0.500  +0.500
--- Factory        :           KNN:     +1.000        +0.500  +1.000  +1.000  +0.500  +0.500  +0.500  +0.500    +0.500  +0.500  +0.500  +0.500
--- Factory        :       HMatrix:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :        Fisher:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :        FDA_MT:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :           MLP:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :     SVM_Gauss:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :           BDT:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :          BDTD:     +0.500        +0.000  +0.500  +0.500  +1.000  +1.000  +1.000  +1.000    +1.000  +1.000  +1.000  +0.000
--- Factory        :       RuleFit:     +0.500        +1.000  +0.500  +0.500  +0.000  +0.000  +0.000  +0.000    +0.000  +0.000  +0.000  +1.000
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        : 
--- Factory        : Inter-MVA overlap matrix (background):
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        :                Likelihood LikelihoodPCA   PDERS     KNN HMatrix  Fisher  FDA_MT     MLP SVM_Gauss     BDT    BDTD RuleFit
--- Factory        :    Likelihood:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        : LikelihoodPCA:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :         PDERS:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :           KNN:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :       HMatrix:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :        Fisher:     +0.500        +0.500  +0.500  +0.500  +0.500  +1.000  +1.000  +0.500    +0.000  +0.500  +0.500  +0.500
--- Factory        :        FDA_MT:     +0.500        +0.500  +0.500  +0.500  +0.500  +1.000  +1.000  +0.500    +0.000  +0.500  +0.500  +0.500
--- Factory        :           MLP:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :     SVM_Gauss:     +0.500        +0.500  +0.500  +0.500  +0.500  +0.000  +0.000  +0.500    +1.000  +0.500  +0.500  +0.500
--- Factory        :           BDT:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :          BDTD:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        :       RuleFit:     +1.000        +1.000  +1.000  +1.000  +1.000  +0.500  +0.500  +1.000    +0.500  +1.000  +1.000  +1.000
--- Factory        : -------------------------------------------------------------------------------------------------------------------------
--- Factory        : 
--- Factory        : Evaluation results ranked by best signal efficiency and purity (area)
--- Factory        : -----------------------------------------------------------------------------
--- Factory        : MVA              Signal efficiency at bkg eff. (error):  |  Sepa-    Signifi-
--- Factory        : Methods:         @B=0.01    @B=0.10    @B=0.30    Area   |  ration:  cance:  
--- Factory        : -----------------------------------------------------------------------------
--- Factory        : Likelihood     : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.912    1.332
--- Factory        : LikelihoodPCA  : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.905    1.161
--- Factory        : PDERS          : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.912    2.675
--- Factory        : KNN            : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.958    15.162
--- Factory        : HMatrix        : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.903    3.425
--- Factory        : MLP            : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.912    1.270
--- Factory        : BDTD           : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.912    4.728
--- Factory        : RuleFit        : 1.000(14)  1.000(14)  1.000(14)  1.000  |  0.920    1.139
--- Factory        : CutsGA         : 0.685(328)  0.686(328)  0.689(327)  0.855  |  0.000    0.000
--- Factory        : Fisher         : 0.495(321)  0.496(321)  0.499(321)  0.626  |  0.912    0.948
--- Factory        : FDA_MT         : 0.495(321)  0.496(321)  0.499(321)  0.626  |  0.912    1.035
--- Factory        : SVM_Gauss      : 0.495(321)  0.496(321)  0.498(321)  0.500  |  0.908    0.059
--- Factory        : BDT            : 0.495(321)  0.496(321)  0.498(321)  0.500  |  0.892    0.544
--- Factory        : -----------------------------------------------------------------------------
--- Factory        : 
--- Factory        : Testing efficiency compared to training efficiency (overtraining check)
--- Factory        : -----------------------------------------------------------------------------
--- Factory        : MVA           Signal efficiency: from test sample (from training sample) 
--- Factory        : Methods:         @B=0.01             @B=0.10            @B=0.30   
--- Factory        : -----------------------------------------------------------------------------
--- Factory        : Likelihood     : 1.000 (0.497)       1.000 (1.000)      1.000 (1.000)
--- Factory        : LikelihoodPCA  : 1.000 (0.495)       1.000 (0.499)      1.000 (1.000)
--- Factory        : PDERS          : 1.000 (0.497)       1.000 (1.000)      1.000 (1.000)
--- Factory        : KNN            : 1.000 (1.000)       1.000 (1.000)      1.000 (1.000)
--- Factory        : HMatrix        : 1.000 (0.498)       1.000 (1.000)      1.000 (1.000)
--- Factory        : MLP            : 1.000 (0.495)       1.000 (0.499)      1.000 (1.000)
--- Factory        : BDTD           : 1.000 (0.500)       1.000 (1.000)      1.000 (1.000)
--- Factory        : RuleFit        : 1.000 (0.497)       1.000 (1.000)      1.000 (1.000)
--- Factory        : CutsGA         : 0.685 (0.111)       0.686 (0.492)      0.689 (0.702)
--- Factory        : Fisher         : 0.495 (0.495)       0.496 (0.498)      0.499 (0.503)
--- Factory        : FDA_MT         : 0.495 (0.495)       0.496 (0.498)      0.499 (0.503)
--- Factory        : SVM_Gauss      : 0.495 (0.495)       0.496 (0.496)      0.498 (0.499)
--- Factory        : BDT            : 0.495 (0.495)       0.496 (0.497)      0.498 (0.502)
--- Factory        : -----------------------------------------------------------------------------
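The efficiencies of 1.000 in the table above are likewise dominated by the 4-event test sample. A rough feel for the statistical weight comes from the simple binomial error on an efficiency k/n; note this is a generic textbook estimate, not necessarily the formula TMVA uses for the quoted parenthesised errors:

```python
import math

def binomial_eff_error(k, n):
    # Simple binomial uncertainty on an efficiency k/n.
    # (Illustrative helper; TMVA's internal error estimate may differ,
    # especially at the boundaries eff = 0 or 1.)
    eff = k / n
    return math.sqrt(eff * (1.0 - eff) / n)

# With 2 test events per class the estimate is degenerate at eff = 1
# and enormous anywhere else:
print(binomial_eff_error(2, 2))  # 0.0 (boundary artifact)
print(binomial_eff_error(1, 2))  # ~0.354, i.e. a 35% uncertainty
```

So the apparent "perfect" separation of most methods, and the test-vs-training differences in the overtraining check, cannot be taken at face value; a meaningful comparison needs the test sample sizes restored to sensible values.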
--- Factory        : 
--- Factory        : Write Test Tree 'TestTree' to file
=== wrote root file TMVAout.root

=== TMVAnalysis is done!

--- Launch TMVA GUI to view input file: TMVAout.root
--- Reading keys ...
=== Note: inactive buttons indicate that the corresponding classifiers were not trained ===

-- PatRyan - 14 Nov 2008