oturkot/xFitter_CI
------------------------------------------------
xFitter --- PDF fit program from HERA.
------------------------------------------------

xFitter is an open-source QCD fit framework designed to extract PDFs and assess the impact of new data. The xFitter project is a common initiative by the H1 and ZEUS collaborations, extended to the LHC collaborations, to provide precision QCD analyses. xFitter has been used as one of the main software packages for the determination of the HERA proton parton densities (PDFs), the HERAPDFs. xFitter has also been used to produce the ATLAS-epWZ12 (NNLO, available in LHAPDF 5.9.1 and LHAPDF 6.1.X) and LHeC (NLO) PDF sets. For further details please check the xfitter.org web page.

The current package includes code to fit DIS inclusive cross-section data as well as Drell-Yan, jet and ttbar processes (using the APPLGRID and FastNLO interfaces).

The program is distributed under the GPL v3 license; see the LICENCE file for more details.

The program uses the QCD evolution package QCDNUM developed by M. Botje and includes other parts of code:
 -- VFNS from R. Thorne, G. Watt (MSTW) @ LO, NLO, NNLO
 -- VFNS from F. Olness (ACOT) @ LO, NLO and NNLO, NNNLO corrections for FL
 -- VFNS from APFEL (FONLL) @ LO, NLO and NNLO
 -- FFNS from S. Alekhin (ABM) @ NLO, NNLO (pole and running heavy quark masses)
 -- DY LO+k-factor calculation from A. Sapronov
 -- PDF error estimation from J. Pumplin
 -- DIS electroweak corrections from H. Spiesberger with Jegerlehner's hadronic parametric contribution (based on e+,e- data)
 -- Bayesian reweighting tool from A. Guffanti (a la NNPDF) and based on EIGENVECTORS from G. Watt (a la MSTW)
 -- DIPOLE models (GBW, IIM, BGS)
 -- TMD (uPDFs) as an alternative to the DGLAP formalism (H. Jung)
 -- Diffractive PDFs (W. Slominski)
 -- total ttbar production cross sections via HATHOR (S. Moch et al.)
 -- differential ttbar production cross sections with DiffTop (M. Guzzi, S. Moch et al.)
 -- MNR calculation for heavy quark production (Mangano, Nason and Ridolfi, implemented by O. Zenaiev)

If the results obtained with the program are to be included in a scientific publication, please use the citations suggested by the REFERENCES file.

For support information, please visit https://wiki-zeuthen.desy.de/xFitter/xFitter

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1) Installation and Usage Instructions: please refer to the INSTALLATION file.

=====================
2) BRIEF DESCRIPTION
=====================

a) Steering cards
--------------------
The software behaviour is controlled by three files with steering commands. These files have predefined names:

 steering.txt  -- controls the main "stable" (unmodified during minimisation) parameters. The file also contains the names of the data files to be fitted and the definition of kinematic cuts.
 minuit.in.txt -- controls minimisation parameters and the minimisation strategy. Standard MINUIT commands can be provided in this file.
 ewparam.txt   -- controls electroweak parameters.

b) Inclusion of data files
-------------------------------
Inclusion of data files is controlled by the &InFiles namelist in the steering.txt file.
For example, by default the following seven HERA1+2 files are included:

  &InFiles
    NInputFiles = 7
    InputFileNames(1) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_920.dat'
    InputFileNames(2) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_820.dat'
    InputFileNames(3) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_575.dat'
    InputFileNames(4) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_460.dat'
    InputFileNames(5) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCem.dat'
    InputFileNames(6) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_CCep.dat'
    InputFileNames(7) = 'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_CCem.dat'
  &End

To include more files:
 -- increase NInputFiles
 -- specify the corresponding InputFileNames()

Another option is to omit the indices:

    NInputFiles = 7
    InputFileNames =
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_920.dat'
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_820.dat'
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_575.dat'
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_460.dat'
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCem.dat'
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_CCep.dat'
      'datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_CCem.dat'

Note that in this case the order of the listed files matters.

Inclusion of statistical or systematic correlations of the data in the fit is done via the &InCorr namelist:

  &InCorr
    !
Number of correlation (statistical, systematic or full) files
    NCorrFiles = 1
    CorrFileNames(1) = 'datafiles/hera/H1_NormInclJets_HighQ2_99-07___H1_NormInclJets_HighQ2_99-07.corr'
  &End

In this case the statistical correlations for the H1_NormInclJets_HighQ2_99-07 data file are included. The method also allows correlations between data sets to be included via the file names, e.g. H1_NormInclJets_HighQ2_99-07___H1_InclJets_HighQ2_99-00.dat.corr.

As an additional option for data sets with a covariance matrix, it is possible to convert the covariance matrix to the nuisance-parameter representation (following the prescription suggested by J. Gao and P. Nadolsky in arXiv:1401.0013):

  &CovarToNuisance
    ! Global switch for using nuisance param representation for covariance mat.
    LConvertCovToNui = .true.
    ! Tolerance -- zero means exact transformation
    Tolerance = 0.0
    ! The following lines allow to adjust error scaling properties
    ! (default: :M - multiplicative, A - additive)
    DataName = 'CMS electon Asymmetry rapidity', 'CMS W muon asymmetry'
    DataSystType = ':A', ':A'
  &End

c) Data files format
--------------------------
Experimental data are provided as standard ASCII text files. The files contain a "header", which describes the data format, and the "data" in the form of a two-dimensional table. Each line of the data table corresponds to a data point; the meaning of the columns is specified in the file header. For example, a header for the HERA combined H1-ZEUS data for the e+p neutral current scattering cross section is given in the file datafiles/hera/h1zeusCombined/inclusiveDis/1506.06042/HERA1+2_NCep_920.dat.

The format of the file follows standard "namelist" conventions. Comments start with an exclamation mark. The pre-defined variables are:

 Name     --- (string) provides a name for the data set.
 Reaction --- (string) reaction type of the data set. The reaction type is used to trigger the corresponding theory calculation.
The following reaction types are currently supported by xFitter:

  'NC e+-p'        -- double-differential NC ep scattering
  'CC e+-p'        -- double-differential CC ep scattering
  'NC e+-p charm'  -- charm production in NC ep scattering
  'CC pp'          -- single-differential d sigma(W^{+,-})/d eta production and W asymmetry at pp and ppbar colliders (LO+k-factors and APPLGRID interface)
  'NC pp'          -- single-differential d sigma(Z)/d y_Z at pp and ppbar colliders (LO+k-factors and APPLGRID interface)
  'pp jets APPLGRID'           -- pp -> inclusive jet production, using APPLGRID
  'FastNLO ep jets'            -- ep jets calculated with the help of a fastNLO v2.0 table
  'FastNLO ep jets normalised' -- fastNLO ep jets normalised to the inclusive DIS cross section
  'muon p'         -- proton structure function in muon-proton DIS scattering
  'DUMMY'          -- dummy reaction type, to be used for testing the data format. In this case the central values of the data are ignored, the theory predictions are used instead, and chi2 will be zero.

 NData   --- (integer) specifies the number of data points in the file. This corresponds to the number of table rows which follow the header.
 NColumn --- (integer) number of columns in the data table.
 ColumnType --- (array of strings) defines the layout of the data table. The following column types are pre-defined: 'Flag', 'Bin', 'Sigma', 'Error' and 'Dummy' (the keywords are case sensitive). 'Flag' controls the treatment of a specific bin (0/1 - exclude/include the bin in the fit, 1 by default), 'Bin' corresponds to an abstract bin definition, 'Sigma' corresponds to the data measurement, 'Error' to various types of uncertainties, and 'Dummy' indicates that the column should be ignored.
 ColumnName --- (array of strings) defines the names of the columns. The meaning of a name depends on the ColumnType. For ColumnType 'Flag' it is 'binFlag'. For ColumnType 'Bin', ColumnName gives the name of the abstract bin. The abstract bins can contain any variable names, but some of them must be present for a correct cross-section calculation.
For example, 'x', 'Q2' and 'y' are required for the DIS NC cross-section calculation. For ColumnType 'Sigma', ColumnName provides a label for the observable, which can be any string. For ColumnType 'Error', the following names have a special meaning:

  'stat'   -- specifies the column with statistical uncertainties
  'uncor'  -- specifies the column with uncorrelated uncertainties
  'total'  -- specifies the column with total uncertainties. Total uncertainties are not used in the fit; however, an additional check is performed if a 'total' column is specified: the sum in quadrature of the statistical, uncorrelated and correlated systematic uncertainties is compared to the total, and a warning is issued if they differ significantly.
  'ignore' -- specifies a column to be ignored (for special studies).

Other names specify columns of correlated systematic uncertainties. For a given data file, each column of correlated uncertainty must have a unique name. To specify correlations across data files, the same name must be used in the different files.

 SystScales --- (array of float, optional) For special studies, systematic uncertainties can be scaled. The numbering of uncertainties starts from the first column with ColumnType 'Error'. For example, setting SystScale(1) = 2. in datafiles/H1ZEUS_NC_e-p_HERA1.0.dat would scale the stat. uncertainty by a factor of two.
 Percent --- (array of bool) For each uncertainty, specifies whether it is given in absolute ("false") or percent ("true") form. The numbering of uncertainties starts from the first column with ColumnType 'Error' (see the example above).
 NInfo --- (integer) Calculation of the cross-section predictions may require additional information about the data set. The number of information strings is given by NInfo.
 CInfo --- (array of strings) Names of the information strings. Several of them are predefined for different cross-section calculations.
 DataInfo --- (array of float) Values corresponding to the CInfo names.
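The 'total' consistency check described above can be sketched in a few lines of Python (the function name and tolerance are illustrative; xFitter implements this check internally):

```python
import math

def check_total(stat, uncor, corr_list, total, rel_tol=0.05):
    """Recompute the quadrature sum of the statistical, uncorrelated and
    correlated systematic uncertainties and compare it to the quoted
    'total' column; return False when they differ significantly."""
    quad = math.sqrt(stat**2 + uncor**2 + sum(c**2 for c in corr_list))
    return math.isclose(quad, total, rel_tol=rel_tol)
```

For instance, stat = 1.0, uncor = 2.0 and a single correlated source of 2.0 add in quadrature to 3.0, so a quoted total of 3.0 passes the check while 5.0 would trigger the warning.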
 IndexDataset --- (integer) Internal H1 Fitter index of the data set. Provide unique numbers to get extra info on chi2/dof for each data set. To index new data sets, please refer to the table available at www.xfitter.org.
 TheoryInfoFile --- (string) Optional additional theory file with extra information for the cross-section calculation. This can be k-factors, an APPLGRID file or a FastNLO table.
 TheoryType --- (string) Theory file type: 'kfactor', 'applgrid', 'FastNLO' or 'expression'. The last one gives more flexibility in the theory definition, allowing a simple formula to be set in the 'TheorExpr' string variable, with previously defined terms in 'TermName', 'TermType' (can be 'kfactor', 'applgrid' or 'virtgrid') and 'TermSource' (the files from which the predictions are taken). TermInfo can be a special option string for fast cross-section evaluation; see which options are supported for each theory type. The expression recognises simple arithmetic operations (+,-,/,*) and the 'sum()' function, which returns predictions summed over bins. Example:

  --------------------------
  TheoryType = 'expression'
  TermName = 'A1', 'K'
  TermType = 'applgrid','kfactor'
  TermInfo = '',''
  TermSource = 'path/to/grid.root' , 'path/to/kfactor.txt'
  TheorExpr = 'K*A1/sum(A1)'
  --------------------------

The expression also recognises numerical terms, e.g. 'k*A+0.1' (due to technical limitations, no spaces are allowed in the 'TheorExpr' value). By default the numeric result of the expression is divided by the bin width. In order to obtain initial values or to use the 'sum()' operation (the integral of the differential distribution, e.g. for normalisation purposes), one should add the '_norm' suffix to the TermType of 'applgrid' and 'virtgrid'. For more information on the 'virtgrid' definition, please see the program's manual.
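As an illustration, the expression 'K*A1/sum(A1)' in the example above amounts to the following per-bin arithmetic (a Python sketch with illustrative names; the actual evaluation is done inside xFitter):

```python
def eval_theor_expr(k_factors, a1):
    """Evaluate 'K*A1/sum(A1)': per-bin k-factor times the grid
    prediction, normalised to the sum over bins of the grid prediction."""
    norm = sum(a1)
    return [k * a / norm for k, a in zip(k_factors, a1)]
```

Here `k_factors` plays the role of the 'K' term read from the k-factor file and `a1` the APPLGRID predictions of the 'A1' term.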
 NKFactor --- (integer) For k-factor files, the number of columns in TheoryInfoFile.
 KFactorNames --- (array of strings) For k-factor files, the names of the columns in TheoryInfoFile.
 PlotDesc --- contains options for the drawing tools, i.e.:
   PlotN          -- number of plots for the data set(s), i.e. SubPlots
   PlotDefColumn  -- data variable used to divide the data into SubPlots
   PlotVarColumn  -- variable providing bin-centre information (to be used only if bin edges are missing)
   PlotDefValue   -- ranges of PlotDefColumn used to divide the data into SubPlots
   PlotOptions(N) -- additional information displayed on the plots, like experiment, process and axis titles. Example:
     PlotOptions(1)='Experiment:H1 ZEUS@ExtraLabel:e^{-}p CC @XTitle: x @YTitle: d#sigma/dx @Title:Q^{2} = 300 @Xlog@Ylog'

c.a) FastNLO specific data format
-----------------------------------
In this subsection we describe the data format specific to the fastNLO implementation. The program includes the fastNLO Toolkit for the new-format tables (v. 3.2+). The old fastNLO table format can still be accessed with the help of APPLGRID (this is not tested in the xFitter environment, though). The reader supports both flexible- and non-flexible-scale tables. For flexible tables, the scales can be defined through the CInfo mechanism in the data file. Below, more details on the different data file variables are given.

 Reaction - For the fastNLO jet cross sections this should be 'FastNLO ep jets' or 'FastNLO ep jets normalised'. The latter refers to jet cross sections normalised to inclusive DIS cross sections (the definition of the normalisation phase space needs to be done for each data point, see the 'ColumnName' field).
 ColumnName - There are some specific names that are recognised internally by the code:
   'Z0Corr': informs the program of the size of the Z0-exchange correction. If it is given, each point calculated by the fastNLO code will be multiplied by the Z0Corr value.
   'NPCorr': informs the program of the size of the non-perturbative correction.
If it is given, each point calculated by the fastNLO code will be multiplied by the NPCorr value. Z0Corr and NPCorr can be applied simultaneously; in this case the calculated cross sections will be multiplied by the product Z0Corr*NPCorr.
   'q2min', 'q2max', 'ymin', 'ymax', 'xmin', 'xmax': These can be used to define the DIS phase space for the normalisation used in the 'FastNLO ep jets normalised' case. Out of these three variable pairs (q2, y, x), exactly two must be defined to fix the DIS phase space unambiguously.
 CInfo, DataInfo - The following info fields are required to calculate the desired cross sections (some can be omitted for the 'FastNLO ep jets normalised' case):
   'PublicationUnits': The output of the fastNLO code can be given in the units used in the relevant publication table or in standardized units. To use publication units, set PublicationUnits to 1; to use absolute units, set it to 0.
   'MurDef', 'MufDef': Here the user can define the scale definitions used by the fastNLO code for variable-scale tables. The renormalisation-scale (MurDef) and factorisation-scale (MufDef) definitions can be set independently. The required value follows the fastNLO standard and should be equal to:
      0 : mu^2 = Q^2
      1 : mu^2 = pt^2
      2 : mu^2 = ( Q^2 + pt^2 )
      3 : mu^2 = ( Q^2 + pt^2 ) / 2
      4 : mu^2 = ( Q^2 + pt^2 ) / 4
      5 : mu^2 = (( Q + pt ) / 2 )^2
      6 : mu^2 = ( Q + pt )^2
      7 : mu^2 = max( Q^2, pt^2 )
      8 : mu^2 = min( Q^2, pt^2 )
      9 : mu^2 = ( scale1 * exp(0.3 * scale2) )^2
   'lumi(e-)/lumi(tot)': This needs to be defined for the 'FastNLO ep jets normalised' option. The normalisation depends on the ratio of the e+ and e- data used to calculate the cross sections. This ratio should be given in the form lumi(e-) / (lumi(e-) + lumi(e+)) and assumes values between 0. and 1.
   'UseZMVFNS': Should be defined for 'FastNLO ep jets normalised'. The calculation of the integrated inclusive DIS cross sections can be time consuming.
This option provides the opportunity to use the "Zero-Mass Variable Flavour Number Scheme" approximation, which is very fast and possibly provides enough precision for normalisation purposes. The ZMVFNS is used if 'UseZMVFNS'=1.; if 'UseZMVFNS'=0., the scheme defined in the global steering.txt file via the variable 'HF_SCHEME' is used.
 TheoryInfoFile - Should be a path to a FastNLO table of version 2.0+.
 TheoryType - Should be set to 'FastNLO'.

d) Minuit cards
--------------------------
The MINUIT card contains the list of parameters used in the fits. The default card (minuit.in.txt) located in the trunk is linked to the STANDARD PARAMETRISATION form as used for HERAPDF2.0 (14 free parameters).

The STANDARD PARAMETRISATION has the form:

  A * x**B * (1 - x)**C * (1 + D * x + E * x**2 + F * x**3) - Ap * x**Bp * (1 - x)**Cp

and it parametrises the following PDFs: uval, dval, Ubar(=ubar+cbar), Dbar(=dbar+sbar), gluon.

Other optional MINUIT cards are stored in input_steering/:
 - CTEQ minuit card
 - CTEQHERA - hybrid: valence like CTEQ, rest like HERAPDF
 - CHEBYSHEV minuit card: uval, dval, Sea(=Ubar+Dbar), gluon
 - BiLog - bi-lognormal parametrisation
 - DIFFRACTION - parametrisation optimised for fits with diffractive DIS data
 - DIPOLE for dipole-model fits (fixing all, or all but the gluon, PDFs)
 - GENETIC - switches on the multi-solution finding tool
 - kt-factorisation - parametrisation for uPDF fits

IMPORTANT: Make sure that the chosen minuit.in.txt corresponds to your selection in the steering.txt.

Explanation of the minuit.in.txt format:

  set title
  new 13p HERAPDF
  parameters
      1 'Ag'  0.0000    0.
      2 'Bg' -0.061953  0.027133
  .....

 - The first 3 lines set the title and announce to MINUIT the list of parameters.
 - The index of a parameter is the first column; it is hardwired in the source code:
      1 -10  gluon parameters
     11 -20  uval parameters
     21 -30  dval parameters
     31 -40  Ubar parameters
     41 -50  Dbar parameters
     51 -60  U parameters
     61 -70  D parameters
     71 -80  Sea parameters
     81 -90  Delta parameters
 - The second column contains user-defined names.
 - The third column is the input value for the parameter.
 - The fourth column is the step size (usually chosen of the same order as the error).
   IMPORTANT: if the step size is 0., then the parameter is FIXED.
 - The fifth column is the lower boundary of the fit parameter.
 - The sixth column is the upper boundary of the fit parameter.
   If boundaries are not given, then there are no boundaries!

Only parameters with a non-zero step size are allowed to vary in the fit (free parameters). Another way to fix a parameter is simply to type, at the end of the list of parameters (make sure there is one free line in between):

  FIX 10   --> fixes parameter 10

Commands taken by MINUIT:

  call fcn 3    -> no fit is performed, only 1 iteration; useful for testing. MINUIT parameters ARE NOT minimised.
  migrad        -> the fit is performed (default number of calls: 2000).
  migrad 20000  -> the fit is performed with up to 20000 calls, then terminates.
  hesse         -> Hessian estimate of the MINUIT parameter errors (more reliable than MIGRAD).

The output of the fit is stored in the output/ directory: minuit.out.txt. Statements to watch for in minuit.out.txt:

  FCN= 575.16                       -> this is the total chi-square
  FROM MIGRAD STATUS=CONVERGED      -> desirable for a fit that converged
  FCN= 575.16 FROM HESSE STATUS=OK  --> desirable for a fit that converged, with errors estimated by the HESSE method
  EDM= 0.12E-04  STRATEGY= 1  ERROR MATRIX ACCURATE

An additional option that works only with ./configure --enable-genetic:

  genetic   (for details please see below)

d.a) GENETIC tool
--------------------------
The genetic option in the MINUIT card is useful when one needs to make sure that MINUIT has found a global minimum and not a local one.
Once activated, this option initialises a scan of the parametrisation parameters and stores the multiple solutions found in output directories named output/genetic.*. An example minuit.in.txt is available in the input_steering/ directory (minuit.in.txt.GENETIC). NOTE: due to time constraints it is recommended to use the RT FAST or ZMVFNS scheme when using this option.

e) Applying cuts
--------------------------
The namelist &Cuts, located inside the steering.txt file, can be used to apply simple process-dependent cuts. The cuts are limited to bin variables; simple low and high limits are allowed. For example, a cut Q2 > 3.5 for NC ep scattering is specified as

  ! Rule #1: Q2 cuts
  ProcessName(1) = 'NC e+-p'
  Variable(1)    = 'Q2'
  CutValueMin(1) = 3.5
  CutValueMax(1) = 1000000.0

A maximum of 100 cuts can be used by default.

f) Choosing the heavy flavour scheme
--------------------------
Several schemes are available for heavy quarks:

 - VFNS (Variable Flavour Number Schemes):
     RT-VFNS [from Robert Thorne]
     ZMVFNS [qcdnum]
     ACOT (ACOT-Full, ACOT-ZM, S-ACOT-Chi) [from Fred Olness]
     FONLL [as implemented in APFEL]
 - FFNS (Fixed Flavour Number Scheme) [qcdnum],
     also available in ABM (openqcdrad-2.0b4) [from Sergey Alekhin]

IMPORTANT if running with FFNS (nf=3):
 - Only neutral current DIS data should be used in the FF scheme, due to missing NLO coefficient functions in the charged current (W+c) process; the valence quarks in this case should be fixed in the minuit.in.txt file. In the FF ABM implementation the charged current coefficients are available, therefore the valence parameters do not need to be fixed.
 - alpha_s(Q2) in the FFNS is 3-flavour and is recommended to be set to a value of 0.105, such that it is not too high at low energies.
 - The scale in the FFNS is defined as mu^2 = Q^2 + 4m_h^2 by default; it can be changed via HQScale in steering.txt (scale variation in ABM is not yet implemented).
 - The pole-mass definition for the heavy quarks is set in ABM by default; the running-mass definition (arXiv:1011.5790v1) can be switched
in by setting HF_SCHEME = 'FF ABM RUNM' in steering.txt.

g) Understanding the output
------------------------------
The results of the minimisation are printed to the standard output and written to files in the output/ directory (the name of the directory can be changed from the default in steering.txt). The quality of the fit can be judged from the total chi2 per degree of freedom. It is printed for each iteration, e.g.:

  1  1363.45  1131  1.21
  =========== Calls to fcn= IfcnCount 2
  uv: 5.5480 0.8105 4.8235 0.0000 9.9214 0.0000 0.0000 0.0000 0.0000 0.0000
  dv: 6.2834 1.0300 4.8463 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
  Ub: 0.1613 -0.1273 7.0597 1.5481 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
  Db: 0.2688 -0.1273 9.5862 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
  GL: 2.2719 -0.0620 5.5624 0.0000 0.0000 0.0000 0.1661 -0.3831 25.0000 0.0000
  ST: 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000

The resulting chi2 is reported for each data set and for the correlated systematic uncertainties separately. This information is printed and written to the output/Results.txt file. The Results.txt file contains additional information about the shifts of the correlated systematic uncertainties.

The minimisation information from MINUIT is stored in the output/minuit.out.txt file. The verbosity level of this information can be changed by MINUIT commands in the minuit.in.txt file. Make sure that MINUIT does not report any errors or warnings at the end of the minimisation.

A point-by-point comparison of the data and predictions after the minimisation is provided in the output/fittedresults.txt file. The file reports: three columns corresponding to the first three bins of the input tables, the data value, the sum in quadrature of the statistical and uncorrelated systematic uncertainties, the total uncertainty, the predicted value after applying the correlated systematic shifts, the pull between data and theory (calculated as (data - theory)/uncorrelated_error), and the data set index.
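The pull definition quoted above is a simple ratio; as a Python sketch (names are illustrative, and the theory value is assumed to be the one already adjusted for correlated systematic shifts):

```python
def pull(data, theory_shifted, uncor_error):
    """Pull between data and theory, as reported in fittedresults.txt:
    (data - theory) / uncorrelated_error, with the theory value already
    adjusted for the correlated systematic shifts."""
    return (data - theory_shifted) / uncor_error
```

A data point of 10.0 against a shifted prediction of 9.0 with an uncorrelated error of 0.5 thus gives a pull of 2.0.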
Similar information is stored in pulls.first.txt and pulls.last.txt (data set index, first bin, second bin, third bin, data, theory, pull); however, the theory is not adjusted for the systematic error shifts in this case.

The output PDFs are stored in the form of tables in the output/pdfs_q2val_XX.txt files. Each of the files reports the values of the gluon and quark PDFs as a function of x for fixed Q^2 points. The Q^2 values and the x grid are specified by the &Output namelist in steering.txt.

The PDF information and the data-to-theory comparisons can be plotted using the bin/xfitter-draw program. The program requires the fit output directory as an argument. Calling the program with several directories as arguments provides a comparison of the PDFs obtained in the various fits. For a full list of the available options of xfitter-draw, please type:

  bin/xfitter-draw --help

Finally, the xFitter package provides PDFs in LHAPDF format (versions 5 and 6). To obtain the LHAPDF5.X grid file, run the tools/tolhapdf.cmd script. The script produces a PDFs.LHgrid file which can be read by lhapdf version lhapdf-5.8.6.tar.gz or later. The LHAPDF6.X version grids are produced automatically in the xfitter_pdf directory.

h) PDF type
------------------------------
Currently there are two PDF types which can be fitted in xFitter: 'proton' for fitting proton data and 'lead' for fitting lead data (which cannot be used in combination with proton data). The PDF type is set in steering.txt with the PDFType flag:

  PDFType = 'proton'

i) Parametrisation style
------------------------------
There are various types of parametric functional forms supported by xFitter.
They are accessed via the steering flag called PDFStyle:

  PDFStyle = 'HERAPDF'

The following options can be selected in steering.txt with a predefined string:

 'HERAPDF'  -- HERAPDF-like, with uval, dval, Ubar, Dbar, glu evolved PDFs
 'CTEQ'     -- CTEQ-like parameterisation
 'CTEQHERA' -- hybrid: valence like CTEQ, rest like HERAPDF
 'CHEB'     -- Chebyshev parameterisation based on glu, sea, uval, dval evolved PDFs
 'LHAPDFQ0' -- use the LHAPDF library to define the PDFs at the starting scale and evolve with the local QCDNUM parameters
 'LHAPDF'   -- use the LHAPDF library to define the PDFs at all scales
 'DDIS'     -- use diffractive DIS
 'BiLog'    -- bi-lognormal parametrisation

j) Options for the chi2 choice
------------------------------
The form of the chi2 function in xFitter is based on nuisance parameters or on the covariance matrix. The form and the scaling properties of the uncertainties are controlled globally by the CHI2SettingsName and Chi2Settings variables:

  CHI2SettingsName = 'StatScale', 'UncorSysScale', 'CorSysScale', 'UncorChi2Type', 'CorChi2Type'
  Chi2Settings     = 'Poisson'  , 'Linear'       , 'Linear'     , 'Diagonal'     , 'Hessian'

The variables 'StatScale', 'UncorSysScale' and 'CorSysScale' allow different scaling rules to be chosen for the statistical, uncorrelated and correlated systematic uncertainties, while 'UncorChi2Type' and 'CorChi2Type' select the treatment of the systematic uncertainties (e.g. the Hessian, Matrix or Offset method can be chosen for the correlated systematics).

Extra corrections can be applied via the Chi2ExtraParam flag (they are all off by default):

  Chi2ExtraParam = 'PoissonCorr'
  ! 'PoissonCorr'           : extra log correction accounting for changing uncertainties
  ! 'FirstIterationRescale' : re-scale uncertainties at the first iteration only
  ! 'ExtraSystRescale'      : additional re-scaling of the stat. uncertainty to account for syst. shifts
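To illustrate the nuisance-parameter treatment of the correlated systematics mentioned above, here is a minimal Python sketch of one common chi2 form (a sketch only, with illustrative names; it is not xFitter's exact implementation, which also handles the scaling rules above):

```python
def chi2_nuisance(data, theory, uncor, syst, shifts):
    """chi2 = sum_i ((d_i - t_i - sum_j b_j*s_ij)/delta_i)^2 + sum_j b_j^2,
    where syst[j][i] is the absolute correlated systematic j on point i,
    uncor[i] the uncorrelated error, and shifts[j] the nuisance shifts b_j."""
    chi2 = sum(b * b for b in shifts)  # penalty for pulling the systematics
    for i in range(len(data)):
        shifted = theory[i] + sum(b * s[i] for b, s in zip(shifts, syst))
        chi2 += ((data[i] - shifted) / uncor[i]) ** 2
    return chi2
```

With no shifts the expression reduces to a diagonal chi2; each unit shift of a systematic source costs one unit of chi2 in the penalty term.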
=====================
3) FITTING uPDF (TMD)
=====================

 *************************************************
 * fitting uPDF (TMD) gluon to HERA data         *
 * using the CASCADE framework                   *
 * H. Jung (DESY)                                *
 * [email protected]                            *
 *************************************************

0. Set the environment variables (please see the INSTALLATION file) and run:

     ./configure --enable-updf --enable-lhapdf --enable-checkBounds

   NOTE: by default the uPDF code uses the cteq66 PDFs as the starting distribution for the valence quarks (Cascade/src/evolve_tmd.F); please make sure you have it downloaded and linked.

1. Use the steering and MINUIT input files from input_steering/:

     cp input_steering/steering.txt.kt-factorisation steering.txt
     cp input_steering/minuit.in.txt.kt-factorisation minuit.in.txt
     cp input_steering/steer-ep-CASCADE steer-ep
     cp input_steering/steer_gluon-evolv steer_gluon-evolv

2. Edit steering.txt:

     &CCFMFiles: give the name of the output grid file for the uPDF.
     &xFitter
       TheoryType = 'uPDF4'   | fit calculating the kernel on the fly, grid of sigma_hat

   All other parameters are standard.

3. Run the program:

     bin/xfitter

4. Plot the F2 fit results:

     bin/xfitter-draw output   !
will draw the F2 results.

==================================
4) USING THE NNPDF REWEIGHTING PROGRAM
==================================

 ***************************************************************
 * NNPDF subpackage - Reweighting program of the NNPDF group   *
 *                                                             *
 * Description of the NNPDF method to create NNPDF PDF sets:   *
 *   arXiv:1002.4407 [hep-ph]                                  *
 *                                                             *
 * Description of the reweighting method:                      *
 *   arXiv:1012.0836 [hep-ph],                                 *
 *   arXiv:1108.1758 [hep-ph]                                  *
 *                                                             *
 * [email protected]                                          *
 * [email protected]                                          *
 ***************************************************************

Running NNPDF reweighting

0) General NNPDF philosophy
---------------------------------
The NNPDF collaboration releases PDF sets consisting of 100 or 1000 PDF replicas, whose mean prediction for a given observable corresponds to the central NNPDF prediction, while the RMS of those replicas for the observable is the NNPDF error. The NNPDF reweighting calculates the chi2 between a new data set and the old NNPDF replicas in order to determine which replicas are still able to describe the new data (they are kept) and which ones fail (they are thrown out). The output of the procedure is a new, updated NNPDF set in LHAPDF format with a reduced number of replicas that describes both the old and the new data well. Some additional check plots, which give clues about the validity of the procedure for the given new data set, are also provided.

1) RUNNING the NNPDF reweighting
---------------------------------
In order to use the reweighting technique, the LHAPDF library first has to be installed and linked as described in the INSTALLATION file. NOTE: reweighting currently works with LHAPDF version 6.1.1 (or higher) only!

First, in the xFitter steering files, the RunningMode parameter has to be set to 'LHAPDF Analysis'.
This will write the following files into the output directory:

  NNPDF-style PDFs: pdf_BAYweights.dat (Bayesian) and pdf_GKweights.dat (Giele-Keller reweighting)
  Hessian PDFs: pdf_vector_cor.dat, pdf_shifts.dat, pdf_rotate.dat (can be used to perform either reweighting or profiling; for more details please see the Manual)

a) To get the results as LHAPDF files, xfitter-process has to be run as:

  bin/xfitter-process reweight <number_output_replicas> <pdf_weights> <pdf_dir_in> <pdf_dir_out>

where
  <number_output_replicas> is the number of replicas that the PDF set should contain after the reweighting,
  <pdf_weights> refers to the BAYweights.dat or GKweights.dat output files,
  <pdf_dir_in> is the directory of the input PDF set,
  <pdf_dir_out> is the directory of the output PDF set.

Two check plots are automatically created when running the reweighting:

  ./weights.pdf --> the weight distributions used in the reweighting procedure (replicas with high weights are kept, low-weight replicas are thrown out)
  ./palpha.pdf (only for Bayesian weighting) --> the distribution of the probability that the uncertainties of the new data should be rescaled by a factor alpha. Ideally, the rescaling factor alpha should be 1. It is essentially a measure of the compatibility of the new data with the old data: if it is much larger than 1, say around 1.7, the new data are incompatible with the ones included in the fit, while a value of 0.5, for example, is suspiciously good.

b) To plot the results as comparisons with the input data, the bin/xfitter-draw program can be run just as for the other fits, e.g.
using the command:

  bin/xfitter-draw reweight-BAY:output:"BAYreweighted" reweight-GK:output:"GKreweighted"

================================================================
5) DESCRIPTION OF THE DiffDIS PACKAGE FOR THE DIFFRACTIVE FIT TO DIS
================================================================

General description
---------------------------------
Diffractive DIS data are fitted within the 'proton vertex factorisation' approach, where diffractive DIS is mediated by the exchange of a hard Pomeron and a secondary Reggeon. The model was used in previous HERA fits, see e.g.

  1. ZEUS Collaboration, S. Chekanov, et al., Nucl. Phys. B 831 (2010) 1.
  2. H1 Collaboration, A. Aktas, et al., Eur. Phys. J. C 48 (2006) 715.

The model supplied by the DiffDIS package provides values of the 'reduced cross section',

  sigma_r = F2 - y^2/(1+(1-y)^2) FL

which is expected to be the experimentally measured quantity. (Actually, the ZEUS data files ZEUS-LPS_2009.dat and ZEUS-LRG_2009.dat contain xPom*sigma_r.) The structure functions F2 and FL are calculated at NLO, with the heavy quarks treated according to the Thorne-Roberts GM-VFNS. The relevant formulae and notation can be found in the above-mentioned papers and in the attached diffit.pdf file; the equation numbers in the following correspond to the latter.

F2 and FL are calculated from the DPDFs given by Eq. (18). The Reggeon PDFs, f^R, are taken to be those of the GRV pion. The fluxes are given by Eqs. (9,10) and require the following parameters, defined in plug_DDIS.h:

  Flux_tmin, Flux_tmax -- t limits for the integrated flux
  Pomeron_tslope       -- Pomeron flux t-slope (b)
  Pomeron_a0           -- Pomeron intercept
  Pomeron_a1           -- Pomeron slope
  Reggeon_tslope       -- Reggeon flux t-slope (b)
  Reggeon_a0           -- Reggeon intercept
  Reggeon_a1           -- Reggeon slope
  Reggeon_factor       -- A_R of Eq. (10a)

The values of these parameters are predefined in plug_DDIS.h and can also be read from the DDIS.coca file. A_P of Eq. (10a) is set to 1 --- it is absorbed into the initial Pomeron parametrisation, Eq. (19).
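The reduced cross section quoted above is a simple combination of F2 and FL; as a one-line Python helper (a sketch; the function name is illustrative and the actual calculation is done inside DiffDIS):

```python
def sigma_reduced(f2, fl, y):
    """Reduced cross section: sigma_r = F2 - y^2/(1+(1-y)^2) * FL."""
    return f2 - y**2 / (1.0 + (1.0 - y)**2) * fl
```

At small y the FL term is suppressed and sigma_r is close to F2, while at y = 1 the kinematic factor reaches 1 and the full FL contribution is subtracted.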
Example run
---------------------------------
This example reproduces the ZEUS-C fit results of Ref. [1]. Here the fitted parameters include:
 -- A_i of Eq. (19) for the gluon and light quarks --- they correspond to Ag, Bg, Cg and Auv, Buv, Cuv of the minuit.in.txt file,
 -- Pomeron_a0, Reggeon_a0 and Reggeon_factor --- they are declared and initialised in the 'ExtraMinimisationParameters' section of the steering.txt file.

In order to reproduce the original results, the ewparam.txt file is modified to contain the appropriate heavy quark masses. The three above-mentioned files are stored as

  input_steering/minuit.in.txt.DIFFRACTION
  input_steering/steering.txt.DIFFRACTION
  input_steering/ewparam.txt.DIFFRACTION

and must be copied to minuit.in.txt, steering.txt and ewparam.txt, respectively, before running the program.

====================================