Transcript of AD-A247 186

AD-A247 186

NAVAL POSTGRADUATE SCHOOL
Monterey, California

THESIS

COMPARISON OF TANK ENGAGEMENT RANGES FROM AN OPERATIONAL FIELD TEST TO THE JANUS(A) COMBAT MODEL

by Allen C. East

September 1991
Thesis Advisor: Donald R. Barr
Co-Advisor: Captain Dennis D. undy, USA

Approved for public release; distribution is unlimited

92-05016


SECURITY CLASSIFICATION OF THIS PAGE: Unclassified

REPORT DOCUMENTATION PAGE (DD Form 1473, 84 MAR)

1a. Report Security Classification: Unclassified
3. Distribution/Availability of Report: Approved for public release; distribution is unlimited.
6a. Name of Performing Organization: Naval Postgraduate School
6c. Address (City, State, and ZIP Code): Monterey, CA 93943-5000
11. Title (Include Security Classification): Comparison of Tank Engagement Ranges from an Operational Field Test to the Janus(A) Combat Model

The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

18. Subject Terms: Operational Field Test; Janus(A) Combat Model; Model-Test-Model

19. Abstract: The purpose of this thesis is to analyze the feasibility of accrediting the Janus(A) combat model for the post-test modeling phase of an Army concept called Model-Test-Model. Specifically, tank engagement ranges collected from an operational field test are compared to similar ranges generated by simulation of the test in the Janus(A) combat model. An automated process is developed to convert position location data from the field test into Janus(A) format so that the simulation replicates the vehicles and routes used in the test. Means and distributions of tank engagement ranges are studied. The important conclusion of this thesis is that Janus(A) generates engagement ranges longer than those observed in the operational field test. Additionally, collection of test data must be improved for Janus(A) to be accredited for engagement range analysis of field tests.

21. Abstract Security Classification: Unclassified
22a. Name of Responsible Individual: Donald Liear, (408) 646-2763, Office Symbol MA/Ba


Approved for public release; distribution is unlimited.

Comparison of Tank Engagement Ranges from an Operational Field Test to the Janus(A) Combat Model

by

Allen C. East
Captain, United States Army
B.S., United States Military Academy, 1981

Submitted in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE IN OPERATIONS RESEARCH

from the

NAVAL POSTGRADUATE SCHOOL
September 1991

Author: Allen C. East

Approved by: Donald R. Barr, Thesis Advisor

Second Reader

Purdue, Chairman, Department of Operations Research


    THESIS DISCLAIMER

The reader is cautioned that computer programs developed in this research may not have been exercised for all cases of interest. While every effort has been made, within the time available, to ensure that the programs are free of computational and logic errors, they cannot be considered validated. Any application of these programs without additional verification is at the risk of the user.


    TABLE OF CONTENTS

I. INTRODUCTION . . . 1
   A. GENERAL . . . 1
   B. DEFINITION OF ACCREDITATION . . . 2
   C. MODEL-TEST-MODEL (M-T-M) CONCEPT . . . 2
      1. Phase I (Pre-Test Modeling) . . . 3
      2. Phase II (Field Test) . . . 3
      3. Phase III (Post-Test Modeling) . . . 4
         a. Compare . . . 4
         b. Improve . . . 4
            (1) Model . . . 4
            (2) Test . . . 5
         c. Expand . . . 6
   D. MODELING TANK ENGAGEMENTS . . . 6
      1. Dependency of Multiple Engagements . . . 6
      2. Inter-Fire Time (IFT) . . . 7
      3. First Range of Engagement (FRE) . . . 8

II. OPERATIONAL FIELD TEST . . . 9
   A. BACKGROUND . . . 9
   B. TEST DATA . . . 10
      1. Trial Selection . . . 10
      2. Data Collection . . . 10


   E. DATA COLLECTION . . . 26

IV. COMPARISON OF FIRST RANGE OF ENGAGEMENTS (FRE's) . . . 28
   A. GENERAL . . . 28
   B. ASSUMPTIONS . . . 28
      1. Independence Within Each Sample . . . 28
      2. Independence Between Samples . . . 29
   C. COMPARISON OF MEANS . . . 29
   D. COMPARISON OF DISTRIBUTIONS . . . 33
      1. Kolmogorov-Smirnov (K-S) Tests . . . 33
      2. Empirical Quantile-Quantile Plots . . . 34
         a. Method of Construction . . . 34
         b. Interpretation . . . 35
   E. ANALYZING THE DIFFERENCES . . . 39
      1. Range Versus Time . . . 39
      2. Number of Rounds Fired in a Multiple Engagement . . . 42
         a. Field Test Data . . . 42
         b. Janus Data . . . 44

V. CONCLUSION AND RECOMMENDATIONS . . . 46
   A. CONCLUSION . . . 46
   B. RECOMMENDATIONS . . . 46
      1. Field Test . . . 46
      2. Janus . . . 47


APPENDIX A: FORTRAN CONVERSION PROGRAM . . . 48

APPENDIX B: LIST OF ACRONYMS . . . 58

REFERENCES . . . 59

INITIAL DISTRIBUTION LIST . . . 60


    LIST OF TABLES

Table I    TRIALS SELECTED . . . 10
Table II   M1A1 ENGAGEMENT SUMMARY . . . 14
Table III  FST ENGAGEMENT SUMMARY . . . 15
Table IV   CFV ENGAGEMENT SUMMARY . . . 16
Table V    BMP ENGAGEMENT SUMMARY . . . 17
Table VI   M1A1 SAMPLE CORRELATIONS . . . 18
Table VII  MEAN COMPARISONS . . . 30
Table VIII DISTRIBUTION COMPARISONS . . . 34
Table IX   CHI-SQUARE GOODNESS OF FIT TEST . . . 43


LIST OF FIGURES

Figure 1  Diagram of Engagement Process . . . 8
Figure 2  Conversion Process . . . 22
Figure 3  Power Curve (M1 vs FST) . . . 32
Figure 4  Power Curve (M1 vs BMP) . . . 32
Figure 5  Power Curve (FST vs M1) . . . 33
Figure 6  Q-Q Plot of Pooled M1 vs FST FRE's (meters) . . . 37
Figure 7  Q-Q Plot of Pooled M1 vs BMP FRE's (meters) . . . 37
Figure 8  Differences Between Ranges (meters) . . . 38
Figure 9  Q-Q Plot of FST vs M1 FRE's (meters) . . . 38
Figure 10 Range versus Time - Test . . . 41
Figure 11 Range versus Time - Janus . . . 41


    I. INTRODUCTION

A. GENERAL

Comparisons between operational field tests and high resolution combat models serve a dual purpose. The operational testers can save millions of dollars due to improved planning and evaluation provided by the judicious application of the models. The modeling community can enhance and improve their models based on real world occurrences in a field test. The Army's Model-Test-Model (M-T-M) concept is the integration of operational field tests and combat models. This thesis contains a report of our effort to analyze the feasibility of accrediting the Janus(A) high resolution combat model for the post-test modeling of tank engagement ranges.

We used data from the Line Of Sight-Forward-Heavy Initial Operational Test (LOS-F-H IOT) conducted at Fort Hunter Liggett, California from April through May 1990. First, an automated process to replicate the field test vehicle routes in Janus(A) was developed. Second, the test data were analyzed to see if the data were adequate to support a comparison. Third, where appropriate, comparisons were conducted between the test data and model output. Appendix B contains a list of acronyms used in this thesis.


B. DEFINITION OF ACCREDITATION

Before an assessment of the feasibility of accrediting Janus(A) for analysis of tank engagement ranges is made, we need to define accreditation. According to a memorandum signed by Mr. Walter W. Hollis, Deputy Under Secretary of the Army (Operations Research), on October 30, 1989, accreditation is "Certification that a model is acceptable for use for a specific type(s) of application(s)." [Ref. 1] Based on this definition, the thrust of this thesis is to determine if Janus(A) is acceptable for use for the specific application of modeling tank engagements that occur at Fort Hunter Liggett during a field test. In other words, can we rely on Janus(A) to reasonably approximate tank engagement ranges that occur at Fort Hunter Liggett during a field test? Validation is "The process of determining that a model is an accurate representation of the intended real-world entity from the perspective of the intended use of the model." [Ref. 1] Validation of Janus(A) is beyond the scope of this thesis. However, our work to accredit Janus(A) supports the Army goal of validating Janus(A).

C. MODEL-TEST-MODEL (M-T-M) CONCEPT

Although the M-T-M concept has been used in the Army, there is no formal Army publication that clearly defines the concept. US Army Training and Doctrine Command (TRADOC) at White Sands Missile Range, New Mexico and Research Analysis


and Maintenance, Inc. have conducted most of the work in M-T-M. In October 1990, Mr. Hollis asked TRADOC Test and Experimentation Center (TEC) at Fort Hunter Liggett, California to work to improve the M-T-M methodology [Ref. 2]. TEC's first steps were to enlist the support of TRADOC Analysis Command (Monterey) and to jointly define the concept as applied to operational testing. Their joint briefing forms the basis for our description of the M-T-M concept, and is our guiding definition of the concept [Ref. 3].

The three phases of the M-T-M concept are pre-test modeling, field test, and post-test modeling. Since the focus of this thesis is on post-test modeling, the first two phases are described only briefly.

1. Phase I (Pre-Test Modeling)

The goal in this phase is to use a model to help plan the test design, including recommending scenarios for actual use on the ground. Although this should never replace on-the-ground planning, the modeler could save many valuable hours by conducting simulations with different force sizes, scenarios, and tactics. The modeler could make recommendations for improving the test design, in terms of measures of performance relevant to the tester.

2. Phase II (Field Test)

The initial phase of a field test is usually the Force Development Test and Evaluation (FDTE). In this phase the


be to adjust the model to replicate the conditions of the field test, not to tweak the model just to generate similar outputs. The modeler should make adjustments until he is satisfied that the model's input parameters mirror the test conditions. If the outputs from the test and model are still different, he should focus on the terrain database and algorithms within the model for possible improvements.

(2) Test. It may be a mistake to conclude that the model is in error just because the model and test outputs are different. Even though the tester takes great care to ensure he records all measurements accurately, the inherent nature of trying to record physical occurrences on the battlefield is extremely complex. The resulting data are not without flaw. Due to a large number of human factors and instrumentation problems, the data may be unsuitable for comparison with output from a model. If the test data are seriously flawed, the tester might attempt to reconstruct some of the data. He should review the procedures used to collect the data during the test and improve them for future trials. We should note that there are varying degrees of simulation during the field test itself. For example, TEC uses random numbers to decide whether a vehicle engaged by another vehicle is "killed". If we were to compare the number of kills between the test and Janus(A), we should be aware that we are comparing two simulations. Additionally, due to the factors


described above, most of the test measures of effectiveness have large variances. These large variances may lead the analyst into accepting null hypotheses of no difference with a very low degree of power.

If the outputs from the model and test are similar, it may be feasible to accredit the model for a specific application. It may also be feasible to extrapolate the test evaluation by running the model with force structures and scenarios that could not be finished during the field test due to resource or testing constraints. Extrapolation of test results becomes less reliable the greater the changes to the force structure and scenarios.
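The effect of large variances on power can be illustrated with a normal-approximation power calculation for a two-sample test of mean difference. The function and the numbers below are illustrative only; they are not the thesis's power-curve computations, and the normal approximation stands in for whatever exact test the author used.

```python
from statistics import NormalDist

def two_sample_power(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z test.

    delta: true difference in mean engagement range (meters)
    sigma: common standard deviation of each sample (meters)
    n: size of each sample
    Uses the normal approximation: reject when |Z| > z_{1-alpha/2},
    where Z has mean delta/se under the alternative.
    """
    se = sigma * (2.0 / n) ** 0.5          # std. error of the difference
    z = NormalDist().inv_cdf(1 - alpha / 2)
    nd = NormalDist()
    # P(reject) = P(Z < -z) + P(Z > z) with Z ~ N(delta/se, 1)
    return nd.cdf(-z + delta / se) + nd.cdf(-z - delta / se)

# Same 300 m true difference, same sample sizes; power collapses
# as the standard deviation grows from 400 m to 1000 m.
print(two_sample_power(300, 400, 20))
print(two_sample_power(300, 1000, 20))
```

With the illustrative numbers, the same 300-meter difference that is detected about two times in three at sigma = 400 m is detected less than one time in five at sigma = 1000 m, which is the trap the paragraph above warns against.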

D. MODELING TANK ENGAGEMENTS

1. Dependency of Multiple Engagements

We define a multiple engagement as the tank firing at the same target within a specified time interval. This time interval is defined in subsection two. We have observed that ranges in a multiple engagement are related. The cause of this dependence can be attributed to the spatial relationship between the firer and the target; both the firer and target do not move very far during a multiple engagement. The ranges in a multiple engagement are thus statistically dependent. We show statistical evidence of this dependence in Chapter II.


We model the times between rounds in a multiple engagement and call them the inter-fire times.

2. Inter-Fire Time (IFT)

The tank engagement process is extremely complex. The following model is a simplification of this process to help define the inter-fire time (IFT).

As a tank traverses the battlefield, the tank commander is searching for targets. Once he identifies a target, he lays the gun tube onto the gunner's sight picture (lay time). The gunner aims at the target by placing the crosshairs of the sight onto the target (aim time). The gunner fires a round at the target. Time of flight (TOF) is the time it takes for the round to reach the target area. The tank commander then decides if the target was destroyed (assess time). For subsequent rounds fired at the same target, the gunner aims at the target while the loader reloads the gun. The equation

    IFT = max(AIM, RELOAD) + TOF + ASSESS

defines the time between firings in a multiple engagement. For subsequent rounds in a multiple engagement, lay time is zero because we assume the gunner already has the target in his sight picture. The IFT includes the maximum of aim and reload time because these two events occur simultaneously on subsequent engagements.
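The IFT equation can be stated as a small function. The function name and the example timings are illustrative; they are not taken from the thesis's FORTRAN code or from Janus parameter data.

```python
def inter_fire_time(aim, reload, tof, assess):
    """Time between rounds in a multiple engagement (seconds).

    Lay time is zero for subsequent rounds because the gunner is
    assumed to already have the target in his sight picture.
    Aiming and reloading proceed simultaneously, so only the
    longer of the two contributes to the inter-fire time.
    """
    return max(aim, reload) + tof + assess

# Illustrative timings: 6 s aim, 8 s reload, 1.5 s time of
# flight, 3 s assess; reloading dominates aiming here.
print(inter_fire_time(6.0, 8.0, 1.5, 3.0))
```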


    II. OPERATIONAL FIELD TEST

A. BACKGROUND

The Line-of-Sight Forward Heavy (LOS-F-H) is an air defense system armed with surface-to-air missiles mounted on a modified Bradley Fighting Vehicle chassis. The mission of the LOS-F-H is to defend Army heavy divisions against air attack. The purpose of the LOS-F-H IOT was to determine the operational effectiveness of a LOS-F-H platoon to accomplish its mission. TEC conducted the test at Fort Hunter Liggett, California from April 9 to May 23, 1990. The test consisted of 50 trials. Each trial, approximately one hour in length, was a force-on-force battle between instrumented Blue and Red mechanized forces of approximately battalion strength. The Blue mechanized forces consisted of four LOS-F-H vehicles, 14 Abrams Main Battle Tanks (M1A1), and 15 Cavalry Fighting Vehicles (CFV). The Red mechanized forces consisted of 14 Future Soviet Tanks (FST) and 10 armored personnel carriers (BMP). The M60A3 main battle tank and the M113 armored personnel carrier were surrogates for the FST and BMP, respectively. In addition to the ground forces, there were various types of helicopters and aircraft on both sides. For more information on the field test see [Ref. 4].


B. TEST DATA

1. Trial Selection

The test consisted of 50 trials conducted in four scenarios. The Blue force mission was either defense facing North, defense facing South, offense facing North, or offense facing South. The Red force conducted the opposite mission accordingly.

An analysis of all 50 trials is beyond the scope of this thesis. To minimize the effects of systematically varied conditions on engagement ranges, six trials that occurred under the same conditions, shown in Table I, were selected.

Table I TRIALS SELECTED

    Trials Selected: L099B, L100B, L112B, L122B, L123B, L125B

    Factor            Condition
    Light             Day
    Blue Mission      Defense
    Blue Orientation  North
    Chemical Equip.   None
    Smoke             None

2. Data Collection

TEC provided two types of data, position location (PLS) and battle (BTL) files, for each of the six trials


selected. The PLS file contained a record of each vehicle's location for every second during the trial. The PLS file was used to create the vehicle routes in Janus(A). This procedure is described in Chapter III. The BTL file contained a record of each engagement between vehicles, including the firer, target, and associated engagement range, if available. These data were compared with the output from Janus(A).

3. Data Limitations

a. Position Location Errors

There are errors associated with position location data. These errors have an impact on vehicle routes and engagement ranges. If the position location is inaccurately recorded, the recorded vehicle route could be quite different from the actual route. Additionally, engagement ranges are calculated from the position location data using the formula

    Range = sqrt((X_firer - X_target)^2 + (Y_firer - Y_target)^2)

where X and Y are the coordinates of the firer and target when the firer pulls the trigger. If the position location is inaccurate at the time of an engagement, the engagement range could be inaccurate. The range measuring system (RMS) at Fort Hunter Liggett records position location and engagement range data. An explanation of this system is given in [Ref. 5]. The three main errors associated with position location data are jitter, gaps, and spikes.


(1) Jitter. This is a slight error in recording vehicle location. The vehicle appears to be moving to a series of nearby points when it is stationary. This error is caused by slight errors in triangulation by the RMS at Fort Hunter Liggett.

(2) Gaps. These are losses of a vehicle's location for various reasons. The main cause is the vehicle moving into an area where its signal cannot be received by the RMS.

(3) Spikes. These are major errors in recording vehicle location. A vehicle appears to move to a location that is clearly not logical given its previous location. Inaccurate triangulation by the RMS is the main cause of this error.
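The range formula above, and a simple screen for spikes of the kind just described, can be sketched as follows. The plausibility threshold (a maximum believable speed) is our own illustrative choice; the thesis does not say how, or whether, TEC screened its tracks this way.

```python
import math

def engagement_range(firer, target):
    """Euclidean range from firer to target at trigger pull.

    firer, target: (x, y) coordinates in meters, as in the
    Range formula above.
    """
    return math.hypot(firer[0] - target[0], firer[1] - target[1])

def flag_spikes(track, max_speed=20.0, dt=1.0):
    """Flag position records that imply an implausible move.

    track: list of (x, y) positions recorded every dt seconds
    (the PLS file logs each vehicle once per second).
    max_speed: illustrative plausibility threshold in m/s.
    Returns indices of points lying farther than max_speed * dt
    from the previous point, i.e., candidate spikes.
    """
    spikes = []
    for i in range(1, len(track)):
        step = engagement_range(track[i - 1], track[i])
        if step > max_speed * dt:
            spikes.append(i)
    return spikes

track = [(0.0, 0.0), (10.0, 0.0), (900.0, 40.0), (30.0, 0.0)]
print(engagement_range((0.0, 0.0), (300.0, 400.0)))
# Both the jump out to the spike and the jump back are flagged.
print(flag_spikes(track))
```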

b. Unknown Engagement Ranges

There are several reasons for errors in recording engagement ranges. We have described in the above paragraph the impact of inaccurate position location on engagement ranges. This section focuses on why some engagement ranges are not recorded in the data.

During the field test, technicians instrument weapons with lasers. When a gunner pulls a weapon trigger, a laser beam is sent in the direction of the aimpoint. The firer's identification is recorded even if the target is unknown. If the sensors on the target receive the laser beam, the RMS


calculates the engagement range. If the laser beam is not received by the target, the RMS cannot determine the target's location; an engagement range is not calculated. Detailed reasons for these unknown engagement ranges are as follows.

(1) Missed Target. The gunner aimed incorrectly and legitimately missed the target. This error is most likely the cause of most of the unknown engagement ranges.

(2) Improper Laser Boresight. The laser and the weapon are not aiming in exactly the same direction. The gunner correctly aims at the target, but the laser beam travels in a slightly different direction and misses the target.

(3) Improper Use of Sensors. The laser reaches the target, but the sensors do not register because they have not been properly emplaced or for some other reason cannot register illumination by the firer's laser.

(4) Attenuation of Laser Beam. The gunner correctly aims at the target, but the laser beam does not reach the target because it is attenuated by heat or dust.

(5) Insufficient Power Output. The gunner correctly aims at the target, but the laser does not have sufficient power to reach the target.

(6) Buffer Overload. In this case the sensors on the target receive the laser beam, but buffers that act as


temporary holding places for data become overloaded and the engagement is not sent to the main computer.

4. Sample Size Analysis

The limitations described above had a great impact on the sample size of engagement ranges. Tables II-V list the number of engagement ranges recorded in the data for each of the six trials we selected. These are the total numbers of engagement ranges before selecting only the FRE's.

Table II M1A1 ENGAGEMENT SUMMARY

    TARGET / TRIAL         L099B  L100B  L112B  L122B  L123B  L125B
    FST                       37     38     29     22     24     13
    BMP                        9     12     10      9     14      8
    OTHER                     15      8      7      0      3      0
    TOTAL KNOWN RANGES        61     58     46     31     41     21
    TOTAL UNKNOWN RANGES     149     17     29     48     28     42
    TOTAL # ENGAGEMENTS      210     75     75     79     69     63
    PERCENT KNOWN RANGES   29.04  77.33  61.33  39.24  59.42  33.33

M1A1 engagement ranges have the highest percentage of known ranges for each trial. The average percentage of known ranges for all six trials is 45 percent of all shots fired.


The M1A1 ranges were compared with Janus(A) data. The FRE's were selected from the sample of known ranges. Chapter IV contains the sample sizes of FRE's used in the statistical analyses.

Table III FST ENGAGEMENT SUMMARY

    TARGET / TRIAL         L099B  L100B  L112B  L122B  L123B  L125B
    M1A1                       4      2      0      3      2     16
    CFV                        0      3      0      4      1     15
    OTHER                      0      7      0      3      3      5
    TOTAL KNOWN RANGES         4     12      0     10      6     36
    TOTAL UNKNOWN RANGES      67     15     16     75     23     89
    TOTAL # ENGAGEMENTS       71     27     16     85     29    125
    PERCENT KNOWN RANGES    5.63  44.44      0  11.76  20.69  28.80

The average percentage of known FST engagement ranges for all six trials was 19 percent of all shots fired. With the exception of trial L125B, the sample size was less than five for each trial. This is not a good sample of the population of FST engagement ranges. Thus, we used only trial L125B to compare FST engagements to Janus(A).


Table IV CFV ENGAGEMENT SUMMARY

    TARGET / TRIAL         L099B  L100B  L112B  L122B  L123B  L125B
    FST                       13     21      4      7      8     50
    BMP                       19    155      7     39     40    167
    OTHER                     89    115     47     28     19     21
    TOTAL KNOWN RANGES       121    291     58     74     67    238
    TOTAL UNKNOWN RANGES     463    812     97    146     97   1190
    TOTAL # ENGAGEMENTS      584   1103    155    220    164   1428
    PERCENT KNOWN RANGES   20.72  26.38  37.41  33.64  40.85  16.67

The average percentage of known engagement ranges for the CFV was 23 percent of all shots fired. Although this appears to be a reasonable number, further analysis of the known engagement ranges reveals that most of these ranges are repetitive. The gunner usually fires several rounds in succession at the same target using the 25mm chain gun. The ranges of these rounds recorded in the data are usually the same. As shown in Section five, the number of FRE's is low compared to the total number of engagements. Thus, a comparison of these ranges between the test and Janus(A) would lead to inconclusive results.


Table V BMP ENGAGEMENT SUMMARY

    TARGET / TRIAL         L099B  L100B  L112B  L122B  L123B  L125B
    M1A1                       0      3      1      0      2      3
    CFV                       13      0      0      0      0     24
    OTHER                      7      5      2      5      0      9
    TOTAL KNOWN RANGES        20      8      3      5      2     36
    TOTAL UNKNOWN RANGES     414    441    147    350    135    355
    TOTAL # ENGAGEMENTS      434    449    150    355    137    391
    PERCENT KNOWN RANGES    4.61   1.78   2.00   1.41   1.46   9.21

The average percentage of known engagement ranges for the BMP was only four percent of all shots fired. This low number indicates the sample size is not a good representation of the population of BMP engagements. Thus, a comparison of these ranges between the test and Janus(A) was not conducted.

5. Correlation Analysis

Statistical evidence of the dependence of ranges in a multiple engagement is shown through a correlation analysis. An analysis of the M1A1 and CFV ranges yields very high correlation between the first and subsequent ranges within an Inter-Fire Time (IFT) of 30 seconds.
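The correlation computation behind such an analysis can be sketched with the sample (Pearson) correlation coefficient. The range values below are invented for illustration; they are not the thesis's data (those appear in Table VI).

```python
def pearson(xs, ys):
    """Sample (Pearson) correlation coefficient of two equal-length
    sequences: covariance divided by the product of the standard
    deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Illustrative ranges in meters: first and follow-up rounds of
# five multiple engagements with inter-fire times <= 30 seconds.
# Because neither vehicle moves far between rounds, the pairs are
# nearly equal and the correlation is close to one.
first_rounds  = [1480.0, 1210.0, 1975.0, 860.0, 1630.0]
second_rounds = [1455.0, 1240.0, 1950.0, 880.0, 1600.0]
print(pearson(first_rounds, second_rounds))
```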


only four data points out of a total of 584 rounds fired would lead to inconclusive results. Analyses of the CFV engagements for the remaining five trials yield similar results.

The low percentage of known engagement ranges for all vehicles except the M1A1 is caused by a combination of the limitations previously discussed. As a supplement to the RMS system, TEC mounts through-sight video on vehicles. Video data reduction techniques are then used to reconstruct engagements not captured by the RMS. Due to lack of time and resources, this technique was used only on the LOS-F-H vehicles. TEC is also working on a new system to replace the RMS. This new system, called KTOPS, uses radar to identify the impact of the round. Knowing the impact of the round will allow the engagement range to be calculated even if the round fails to hit the target. Post-trial reconstruction or the KTOPS system can be used in the future to enhance recording of engagement ranges.


III. JANUS(A) COMBAT MODEL

A. INTRODUCTION

Janus is an event driven, high resolution combat model named after the Roman god who was guardian of portals and patron of beginnings and endings. The model runs interactively or systemically. In the interactive mode, the red and blue forces appear on separate video terminals. The modeler views the battle as it occurs. He can affect the battle like a commander by changing options such as vehicle routes and speeds. In the systemic mode, the model runs without a man-in-the-loop for a specified time. Terrain is depicted with contour lines, vegetation, and cities. Graphical symbols represent one or more systems. Each system has one or more weapons. For example, a tank is a system containing a main gun and machine guns.

Janus uses the Night Vision Electro-Optical Laboratory (NVEOL) model for detection. When a system detects a target, an algorithm determines line of sight based on terrain and weather. If the system has line of sight, can range the target, has ammunition, and is not in a hold-fire status, it fires at the target. On the graphics display, a red line from the firer to the target designates a firing. The simulation resolves conflict by comparing random number draws to a


probability of hit and kill database. Post-processing files allow the analyst to collect a wide range of data. For more detailed information on Janus see [Ref. 6].
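Both the field instrumentation (as noted in Chapter I) and Janus adjudicate engagements by comparing random draws against hit and kill probabilities. A minimal sketch of that style of Monte Carlo adjudication follows; the probabilities and the two-stage hit-then-kill structure are illustrative placeholders, not values or logic taken from the Janus database.

```python
import random

def adjudicate(p_hit, p_kill_given_hit, rng=random.random):
    """Monte Carlo engagement adjudication: draw against a hit
    probability, then (if hit) against a conditional kill
    probability. Returns "kill", "hit", or "miss".
    """
    if rng() < p_hit:
        return "kill" if rng() < p_kill_given_hit else "hit"
    return "miss"

# With a fixed seed the run is reproducible; the long-run kill
# fraction approaches p_hit * p_kill_given_hit.
random.seed(1)
outcomes = [adjudicate(0.6, 0.7) for _ in range(1000)]
print(outcomes.count("kill") / 1000)
```

Because the outcome is a random draw, comparing kill counts between the field test and Janus is, as the text notes, a comparison of two simulations.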

B. TEST DATA CONVERSION

A challenge in converting data was to take the test position location (PLS) data from each of the six field trials and convert them into corresponding Janus scenarios. Without such a conversion it would be necessary to manually generate the six scenarios. It is possible to misread the many grid coordinates from the field test position location data or misplace them on the Janus screen. The human eye can only distinguish six digits of the ten digit grid coordinates on the screen. Additionally, it takes about six hours to place all the grid coordinates manually. An automated process reduces this time to one hour.

Modelers have converted data from the National Training Center (NTC), located at Fort Irwin, California, to run in Janus. Mr. Al Kellner, a programmer from TRAC White Sands Missile Range, has written an extensive FORTRAN program, called INITNTC, to convert NTC battles into Janus format. TRAC (Monterey) uses this program to analyze NTC battles using Janus [Ref. 7]. The program takes four files of NTC data and makes the necessary Janus files. Once the Janus files are created, the model is run normally. For a more detailed explanation of the INITNTC program see [Ref. 8].


The FORTRAN program developed in this thesis, PLSTRN (Appendix A), takes the field test PLS data file and transforms it into four pseudo-NTC data files. The INITNTC program is then run using these pseudo-NTC files to create the necessary Janus files. Minor changes to the INITNTC FORTRAN code were made to accommodate differences in grid zone designators between Fort Hunter Liggett and Fort Irwin. Records of these changes are on file at TRAC-Monterey. Figure 2 contains a diagram of the conversion process.

[Figure 2 Conversion Process: the test PLS data file is read by PLSTRN, which writes four pseudo-NTC files; INITNTC then reads these to create the Janus files forces.dat, system.dat, and dploy.dat.]

The conversion process creates a scenario that replicates the force structure and vehicle routes of a field trial. The modeler can adjust several input parameters to further enhance the ability of the simulation to replicate a trial.
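One data-reduction step any converter of this kind plausibly needs can be sketched as follows: the PLS file logs every vehicle once per second, far denser than the node lists Janus needs for a route, so the track must be thinned to waypoints. The thinning rule, threshold, and function name below are our own illustration; the actual PLSTRN record layout and logic are in Appendix A of the thesis and are not reproduced here.

```python
def thin_route(points, min_move=50.0):
    """Reduce a per-second (x, y) position track to route waypoints.

    Keeps a point only when the vehicle has moved at least
    min_move meters since the last kept point, so a stationary or
    slowly creeping vehicle does not generate thousands of nodes.
    """
    if not points:
        return []
    route = [points[0]]
    for p in points[1:]:
        last = route[-1]
        moved = ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2) ** 0.5
        if moved >= min_move:
            route.append(p)
    return route

# A short eastward-then-northward track collapses to three waypoints.
track = [(0, 0), (10, 0), (30, 0), (60, 0), (60, 20), (60, 80)]
print(thin_route(track))
```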


C. MODEL INPUT PARAMETERS

1. Weapon Characteristics (Ground Vehicles)

Only the weapons used during the field test were replicated in Janus. For example, during the field test, the main gun was the only weapon instrumented with lasers on each M1A1 tank. Thus, machine guns on the M1A1 tanks were not used in Janus.

Janus has a database containing hundreds of parameter values. Examples of these parameters are weapon aim time, lay time, reload time, and maximum range. Unless specifically stated in this thesis, these values were not changed, as we assumed them to be correct. Subjective changes to these parameters would decrease the reliability of our results. Communication between the tester and modeler will help to identify the correct parameter values. Standardization of these values is necessary to avoid modifying them to produce desired test results.

2. Ammunition Basic Loads

During the field test, each vehicle had a specific amount and type of ammunition for each weapon, called an ammunition basic load. Testers limited the number of laser firings for each weapon to each weapon's ammunition basic load. Ammunition basic loads used in Janus were changed to reflect the ammunition basic loads used in the field test.


    3. Terrain

    In Janus, terrain is simulated by a series of rectangular cells. Each cell contains the elevation at the lower left corner of the cell [Ref. 9: p. 14]. Fifty meter terrain resolution was used in the simulation because it covered the entire player maneuver area at Fort Hunter Liggett. In fifty meter terrain resolution, each side of the cell simulates fifty meters of actual terrain. A vehicle's elevation is determined by interpolation of the elevations at the four corner points of the cell in which the vehicle is located [Ref. 9: p. 15]. Elevation is important because it is used by Janus to determine line of sight between vehicles.

    Roads were added to the terrain database according to the Janus users manual, using a terrain map of the maneuver area.
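    As an illustration of this interpolation step, the sketch below implements standard bilinear interpolation over the four corner elevations of a fifty-meter cell. The exact scheme used by Janus is specified in [Ref. 9]; the function and the sample elevations here are illustrative assumptions, not the model's code.

```python
# Bilinear interpolation of a vehicle's elevation from the four
# corner elevations of the 50 m terrain cell containing it.
CELL = 50.0  # meters per cell side (fifty-meter terrain resolution)

def elevation(x, y, z_sw, z_se, z_nw, z_ne):
    """Interpolate elevation at (x, y), where x and y are offsets in
    meters from the lower-left (SW) corner of the cell."""
    fx, fy = x / CELL, y / CELL          # fractional position in cell
    z_south = z_sw + fx * (z_se - z_sw)  # interpolate along south edge
    z_north = z_nw + fx * (z_ne - z_nw)  # interpolate along north edge
    return z_south + fy * (z_north - z_south)

# At the exact SW corner the interpolated value is the corner elevation;
# at the cell center it is the average of the four corners.
print(elevation(0, 0, 100, 120, 110, 130))    # 100.0
print(elevation(25, 25, 100, 120, 110, 130))  # 115.0
```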

    4. Nodes

    In the Janus display, a node is a triangle connecting two line segments. The line segments represent the vehicle route. The node represents a change in direction of the vehicle route. A time can be placed on a node; a vehicle stops if it reaches a node before the time placed on that node. A vehicle continues when the simulation time is greater than the time placed on the node. Janus users place times on nodes to control the timing of the battle. INITNTC does create nodes for each vehicle route; it does not associate a time with each


    node. These times were placed manually for our Janus scenarios to help synchronize the battles. The data for the nodes were collected by adding an output statement to the INITNTC FORTRAN code. The computer generates a file called timenodes.dat each time INITNTC is run. This file contains the time associated with each node.

    5. Weather

    The test data included exact information about the visibility, temperature, and other weather conditions that existed during the field test. These conditions were replicated as closely as possible in Janus.

    6. Verification

    The Janus verification program checks for errors in each scenario. Failure to assign a weapon against a target is an example of an error. Verification was conducted for each scenario. All errors were corrected before the simulation was run.

    D. SIMULATION RUNS

    The six Janus scenarios corresponded to the six field trials selected for analysis. Each scenario was first run interactively to ensure the simulation was working properly. To determine the number of systematic runs per scenario for data collection, the criterion of determining a 95 percent confidence interval for the mean M1A1 engagement range with length 100 meters was selected. A 100 meter


    confidence interval is a fairly tight bound on a mean that could range from 500 to 2500 meters. A point estimate s² of the variance of the M1A1 engagement range was computed using data from five simulation runs of trial L100B. Solving for n, the overall sample size of M1A1 engagements for a (1-α)100% confidence interval of length L in each scenario, using the Central Limit Theorem, yields

    n ≥ z²(α/2) · s² / (L/2)²

    or

    n = 1.96² × 111559 / 50² ≈ 172
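    The sample-size arithmetic can be verified with a short sketch (written in Python for convenience; the constants are the values given above):

```python
import math

z = 1.96            # z-value for a 95 percent confidence interval
var = 111559.0      # point estimate of M1A1 engagement-range variance
half_len = 50.0     # L/2 for an interval of length L = 100 m

n = math.ceil(z**2 * var / half_len**2)   # required number of engagements
runs = math.ceil(n / 18)                  # at about 18 engagements per run

print(n, runs)  # 172 10
```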

    The average number of engagements per run was 18. To obtain a sample size of 172 engagements, the simulation would have to be run 172/18 = 9.55, or approximately 10 times. Thus, each of the six scenarios was run 10 times for a grand total of 60 simulation runs.

    E. DATA COLLECTION

    The Janus Analyst Work Station (JAWS) direct fire file was used to collect the engagement ranges. This flat ASCII file is generated from the main menu after a simulation run. A FORTRAN program computed the ranges from the X and Y locations of the vehicles at the time of the engagement using the equation described on page 11. A second FORTRAN program


    computed the first ranges of engagement (FRE's) using an Inter-Fire Time (IFT) of 30 seconds. This inter-fire time was based on the values of aim time, reload time, and time of flight in the Janus database.
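    The thesis performed this extraction in FORTRAN; the sketch below shows the same logic in Python, with a hypothetical event-record layout invented purely for illustration:

```python
import math

IFT = 30.0  # inter-fire time in seconds

def first_ranges(events):
    """Extract first ranges of engagement (FRE's) from firing events.

    events: list of (time_s, firer_id, target_id, fx, fy, tx, ty),
    sorted by time.  A shot opens a new engagement only if more than
    the inter-fire time has elapsed since the same firer last shot at
    the same target; otherwise it belongs to a multiple engagement.
    """
    last_shot = {}  # (firer, target) -> time of most recent shot
    fres = []
    for t, firer, target, fx, fy, tx, ty in events:
        pair = (firer, target)
        if pair not in last_shot or t - last_shot[pair] > IFT:
            fres.append(math.hypot(tx - fx, ty - fy))  # range in meters
        last_shot[pair] = t
    return fres

events = [
    (100.0, 'M1-01', 'FST-03', 0, 0, 1500, 0),    # opens an engagement
    (110.0, 'M1-01', 'FST-03', 0, 0, 1450, 0),    # within IFT: same engagement
    (200.0, 'M1-01', 'FST-03', 0, 0, 900, 1200),  # new engagement
]
print(first_ranges(events))  # [1500.0, 1500.0]
```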


    IV. COMPARISON OF FIRST RANGES OF ENGAGEMENT (FREs)

    A. GENERAL

    The purpose of this comparison is to determine whether the means and distributions of FRE's are the same for the field test and Janus. Samples from the test consisted of FRE's from trials L099B, L100B, L112B, L122B, L123B, and L125B. Samples from Janus consisted of FRE's from ten simulation runs of each of these six trials. Normality of each of the samples was checked using the Kolmogorov-Smirnov and Chi-Square Goodness of Fit tests. In general, the data are not normally distributed. Thus, mostly nonparametric methods were used in the comparison. A significance level of .05 was established before conducting sample comparisons. The statistical software Statgraphics was used in the analysis of engagement range data [Ref. 10].

    B. ASSUMPTIONS

    The following assumptions were made before conducting Two-Sample statistical tests.

    1. Independence Within Each Sample

    Each sample is a random sample from its respective population. The reason for analyzing FRE's instead of all engagement ranges was to improve the tenability of assuming independence within each sample.


    2. Independence Between Samples

    Data in each sample are independent of data from other samples. This assumption implies that the samples from the field test are independent of the samples from Janus.

    C. COMPARISON OF MEANS

    The Mann-Whitney test was used to test the hypothesis that the FRE means are the same between the test and Janus. The assumptions for this nonparametric test are independence within and between samples. Field test trials were compared with corresponding Janus scenarios. Also, the test FRE's were pooled across trials and compared with the pooled Janus FRE's. Table VII contains the results of the tests. The test fails to reject the hypothesis that the means are the same in the three cases indicated by asterisks in Table VII. Since the p-value is less than .05 in the other 13 cases, the hypotheses that the means are the same between the test and Janus are rejected. For the three cases where the Mann-Whitney test fails to reject the null hypothesis, the power cannot be calculated. Closer analysis of these three cases was conducted using the Two-Sample t-test as an approximate data analysis procedure. This test requires the additional assumption of normality of the samples, but is generally considered to be robust with respect to this assumption. The advantage of using the t-test is that the power of the test can be calculated.
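    The Mann-Whitney computations were done in Statgraphics [Ref. 10]; a minimal sketch of the statistic and its large-sample two-sided p-value, with made-up range samples, is:

```python
import math

def mann_whitney(x, y):
    """Two-sided Mann-Whitney test via the large-sample normal
    approximation (no tie correction)."""
    # U counts, over all pairs, how often an x-value exceeds a y-value
    # (ties count one half).
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    m, n = len(x), len(y)
    mean, var = m * n / 2.0, m * n * (m + n + 1) / 12.0
    z = (u - mean) / math.sqrt(var)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

# Well-separated samples (hypothetical ranges in meters) give a small
# p-value, so the equal-means hypothesis would be rejected.
u, p = mann_whitney([410, 520, 650, 700], [1900, 2100, 2300, 2500])
print(u, p < 0.05)  # 0.0 True
```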


    Table VII MEAN COMPARISONS (Mann-Whitney Tests)
    Each cell: Test Sample Size/Mean; (p-value); Janus Sample Size/Mean

    FIRER vs TGT/  M1 vs FST                    M1 vs BMP                   FST vs M1                     FST vs CFV
    TRIAL
    L099B          22/1619m **(.5462) 49/1639m  8/1141m **(.1311) 16/559m
    L100B          26/641m  (.0000)  203/1919m  55/567m  (.0000) 128/1937m
    L112B          25/1415m (.0000)   72/2354m  10/1045m (.0001)  62/2277m
    L122B          15/282m  (.0000)   56/1370m  6/467m   (.0031)  62/975m
    L123B          18/550m  (.0000)  117/2619m  12/852m  (.0000)  63/2788m
    L125B          13/873m  (.0354)   13/1355m  8/465m   (.0010)  42/1400m  10/1082m **(.0869) 21/1583m   7/1132m (.0004) 181/261
    POOLED         119/951m (.0000)  510/2039m  55/774m  (.0000) 373/1858m

    ** Fail to reject hypothesis that means are equal between Field Test and Janus.

    For the three cases under question, the Two-Sample t-test agreed with the Mann-Whitney test. Figures 3 through 5 contain the power curves associated with the three t-tests. We wrote an A Programming Language (APL) program to calculate the data for the power curves using [Ref. 11: p. 128]. The power function in Figure 3 indicates that if the alternate


    hypothesis is that the difference between means is 500 meters, then the probability of accepting this hypothesis when it is true is .20. If the difference is 1000 meters, the probability is .64. The power becomes high, .94, when the difference is 1500 meters. A difference of 1500 meters is very large considering the range of the tank is approximately 3000 meters. Thus, even though the Two-Sample t-test fails to reject the null hypothesis that the means are equal, the power of the test is low. Analysis of Figure 4 indicates even lower power than Figure 3. At differences of 500, 1000, and 1500 meters, the power is .10, .31, and .65, respectively. Analysis of Figure 5 also indicates low power, with values of .15, .41, and .86 for differences between means of 500, 1000, and 1500 meters, respectively. Out of 16 comparisons, in 13 we conclude that the means are different. We have shown the low degree of power associated with the three tests that conclude that the means could be the same. Overall, the means are significantly higher in Janus. Possible reasons for these differences are discussed in Section E.
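    The power values were computed in APL [Ref. 11]; a rough equivalent using the normal approximation to the two-sample test's power function is sketched below. The sample sizes and standard deviation are illustrative assumptions, not the thesis values, so the printed powers will not match Figures 3 through 5 exactly.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def power(delta, m, n, sigma, alpha=0.05):
    """Approximate power of a two-sided two-sample test to detect a
    true difference in means of delta (normal approximation)."""
    se = sigma * (1.0 / m + 1.0 / n) ** 0.5
    z_crit = N.inv_cdf(1.0 - alpha / 2.0)
    shift = delta / se
    return N.cdf(-z_crit + shift) + N.cdf(-z_crit - shift)

# Power rises with the difference between means, as in Figures 3-5;
# at delta = 0 it equals the significance level .05.
for d in (0.0, 500.0, 1000.0, 1500.0):
    print(round(power(d, 10, 20, 800.0), 2))
```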


    [Figure 3: Power Curve (M1 vs FST FRE's) — power versus difference between means (meters), alpha = .05.]

    [Figure 4: Power Curve (M1 vs BMP FRE's, Trial L099B) — power versus difference between means (meters), alpha = .05.]


    [Figure 5: Power Curve (FST vs M1 FRE's, Trial L125B) — power versus difference between means (meters), alpha = .05.]

    D. COMPARISON OF DISTRIBUTIONS

    1. Kolmogorov-Smirnov (K-S) Tests

    The Kolmogorov-Smirnov (K-S) test was used to test the hypothesis that the distributions of FRE's are the same between the test and Janus. The assumptions for this nonparametric test are independence within and between samples. Table VIII contains the results of the K-S tests. The sample sizes are the same as reported in Table VII. Out of 16 comparisons, the K-S test rejects the hypothesis that the distributions are the same in every case but one. This is strong evidence that the distributions are different.
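    A minimal sketch of the two-sample K-S statistic (the tests themselves were run in Statgraphics; the samples below are toy values):

```python
def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    xs, ys = sorted(x), sorted(y)
    grid = sorted(set(xs + ys))
    def ecdf(s, v):                       # fraction of s that is <= v
        return sum(1 for u in s if u <= v) / len(s)
    return max(abs(ecdf(xs, v) - ecdf(ys, v)) for v in grid)

# Identical samples give D = 0; completely separated samples give D = 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_statistic([1, 2, 3], [10, 20, 30]))  # 1.0
```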


    Table VIII DISTRIBUTION COMPARISONS
    Kolmogorov-Smirnov Tests (p-value)

    FIRER vs TGT/  M1 vs FST  M1 vs BMP  FST vs M1   FST vs CFV
    TRIAL
    L099B          (.0375)    (.0310)
    L100B          (.0000)    (.0000)
    L112B          (.0000)    (.0000)
    L122B          (.0000)    (.0195)
    L123B          (.0000)    (.0000)
    L125B          (.0146)    (.0002)    **(.2059)   (.0019)
    POOLED         (.0000)    (.0000)

    ** Fail to reject hypothesis that distributions are the same between Field Test and Janus.

    2. Empirical Quantile-Quantile Plots

    To determine how the distributions differ, empirical quantile-quantile (Q-Q) plots were drawn.

    a. Method of Construction

    This plot is constructed by plotting the quantiles of one empirical distribution against those of another. Each sample of FRE's is ordered from lowest to highest. If the sample sizes are the same, the ordered samples are plotted against each other. Since the sample sizes differ in the data, the quantiles from the larger data set were linearly interpolated onto the smaller data set [Ref. 12: p. 55]. The ordered smaller sample was plotted against the interpolated data.
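    The interpolation step can be sketched as follows; the plotting-position convention (i − .5)/n used here is an assumption chosen for illustration and may differ in detail from [Ref. 12]:

```python
def qq_pairs(small, large):
    """Pair each ordered value of the smaller sample with the linearly
    interpolated quantile of the larger sample at the same plotting
    position p_i = (i - 0.5) / n."""
    s, l = sorted(small), sorted(large)
    n, m = len(s), len(l)
    pairs = []
    for i, v in enumerate(s):
        p = (i + 0.5) / n                 # plotting position of s[i]
        pos = p * m - 0.5                 # fractional index into l
        lo = max(0, min(m - 2, int(pos)))
        frac = min(1.0, max(0.0, pos - lo))
        q = l[lo] + frac * (l[lo + 1] - l[lo])  # linear interpolation
        pairs.append((v, q))
    return pairs

# With equal sample sizes the interpolation returns the order
# statistics themselves, so identical samples fall on the line Y = X.
print(qq_pairs([1, 2, 3], [1, 2, 3]))  # [(1, 1.0), (2, 2.0), (3, 3.0)]
```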


    b. Interpretation

    The line Y=X is the reference for the empirical quantile-quantile plot. If the plot lies along the line Y=X, then the distributions are the same. If the plot has the same slope with a different intercept, then the distributions are the same with a shift in location. If the plot is a straight line with a different slope, then the variances of the distributions differ. Figures 6 and 7 contain the empirical quantile-quantile plots of the pooled data for the M1A1. The points do not lie along a straight line, but along a quadratic curve. It is clear that Janus produces longer engagement ranges than the field test for the M1A1 up to 3000 meters. Beyond 3000 meters, the ranges are longer in the field test; this is because the maximum range for the M1A1 in the Janus database is less than the maximum range for the M1A1 in the field test. In the field test, six out of the 119 M1A1 FRE's were beyond 3000 meters; none were observed beyond 3000 meters in Janus. Figure 8 contains the differences between field test and Janus quantiles plotted against the Janus quantiles. This plot shows that as the Janus ranges increase to 2500 meters, the differences between the field test and Janus ranges increase; the test ranges run 200 to 1500 meters shorter. Beyond 2500 meters, the differences decrease.

    Figure 9 contains the empirical quantile-quantile plot for the FST versus the M1A1 in Trial L125B. The plot appears to be a straight line with slope equal to one and intercept equal


    [Figure 6: Empirical Q-Q plot of pooled M1 vs FST FRE's (6 trials), field test versus Janus quantiles (meters).]

    [Figure 7: Empirical Q-Q plot of pooled M1 vs BMP FRE's (6 trials), field test versus Janus quantiles (meters).]


    [Figure 8: Differences between field test and Janus quantiles, M1 vs FST (6 trials), plotted against the Janus quantiles (meters).]

    [Figure 9: Empirical Q-Q plot of FST vs M1 FRE's, Trial L125B (meters), showing a shift from the line Y=X.]


    E. ANALYZING THE DIFFERENCES

    1. Range Versus Time

    All six of the trials have Red in the offense while Blue is in the defense. The Red forces start in the North and move toward the Blue forces in the South. In general, as time increases, the engagement ranges should decrease because the opposing forces move closer to each other. Tanks remaining on high ground may get long range shots throughout the battle, or Blue forces positioned forward may engage early, but most of the engagement ranges should get shorter as the battle progresses. A correlation analysis of range and time for the pooled Janus and field test data revealed that range is negatively correlated with time. Since both Janus and field test data support the idea that range decreases as time increases, why do the ranges differ?

    Clues about why Janus produces longer engagement ranges than the field test are found by plotting range versus time for a trial. Figures 10 and 11 are range versus time plots for trial L112B. The field test version of this trial had four Blue tanks positioned North engage Red tanks ten minutes into the battle at a range of 400 to 800 meters and then withdraw to the remaining Blue forces in the South. The main battle occurred at 50 minutes at a range of 1000 to 1800 meters. A Blue tank positioned on high ground engaged at 4000 meters near the end of the battle.


    The Janus version of this trial had the four Blue tanks engage two minutes into the battle at a range of 1100 to 3000 meters. The main battle occurred at 35 minutes at a range of 2300 to 2800 meters. Since the maximum range of the M1A1 in Janus is less than 4000 meters, the 4000 meter engagements observed in the field test were not possible in Janus. Janus accurately represented the flow of the battle. The only difference is that Janus engaged earlier at longer ranges.

    An interview with one of the tank company commanders from the field test indicated that most of the tank engagements occurred at close range mainly for three reasons:

    1. The players are required to wear laser-safe goggles that restrict vision to varying degrees.
    2. The heavy amount of dust stirred up by mechanized vehicles obscures targets.
    3. The undulations in the terrain restrict line of sight beyond about 1000 meters.

    The tank company commander added that players did not wait for targets to enter engagement areas before firing. Of the above reasons, only the protective goggles would not be a factor in actual combat. Even if the players were not required to wear goggles, the other two reasons are sufficient to support short range tank engagements at Fort Hunter Liggett. Janus does represent dust and terrain. The 50 meter terrain resolution used in this simulation evidently does not represent the undulations and extent of vegetation the players actually face on the ground.


    [Figure 10: Range versus Time - Test. Field test FRE's, Trial L112B; range (meters) versus time (min).]

    [Figure 11: Range versus Time - Janus. Janus FRE's, Trial L112B; range (meters) versus time (min).]


    2. Number of Rounds Fired in a Multiple Engagement

    a. Field Test Data

    Recall that we defined a multiple engagement as a tank firing at the same target within the inter-fire time. There is evidence in the field test data that the number of rounds a tank fires in a multiple engagement, N, is a geometric random variable with p estimated to be .7391. The geometric random variable, N, records the trial number of the first success. Each trial has the same probability of success, p. For our model, a trial is the tank firing a round, and a success occurs when the tank stops firing at the target within the inter-fire time. A success occurs when:

    1. the target is hit,
    2. the firer is hit, or
    3. line of sight no longer exists between the firer and the target.

    Thus, the probabilities that a tank fires N rounds during a multiple engagement are estimated to be:

    P(N=1) = .7391(1-.7391)^0 = .7391 ,

    P(N=2) = .7391(1-.7391)^1 = .1928 ,

    P(N=3) = .7391(1-.7391)^2 = .0503 ,

    and

    P(N≥4) = 1 - [P(N=1) + P(N=2) + P(N=3)] = .0178


    Evidence for this geometric fit was obtained using the Chi-Square Goodness of Fit test shown in Table IX with the pooled M1A1 engagement data. The estimate for the parameter, p, was obtained from

    N̄ = [87(1) + 24(2) + 6(3) + 2(4)] / 119 = 1.3529

    and

    p̂ = 1/N̄ = 1/1.3529 = .7391

    The hypothesis was not rejected because

    P(χ² (2 df) > .06514) = .9680 > .05

    Table IX CHI-SQUARE GOODNESS OF FIT TEST

    Ho: # Rounds per Multiple Engagement is Geometric

    # Rounds per          Observed   Expected   [(O-E)^2]/E
    multiple engagement
        1                    87       87.95       .01032
        2                    24       22.95       .04832
        3                     6        5.99       .00003
        4                     2        2.11       .00647
    Total                   119      119          .06514

    degrees of freedom = #cells - #parameters estimated - 1 = 2


    Janus indicates that tanks engage at longer ranges and that line of sight is frequent and of longer duration.


    V. CONCLUSION AND RECOMMENDATIONS

    A. CONCLUSION

    At this time, Janus should not be accredited for post-test modeling of ground vehicle engagements because:

    1. statistically significant differences in tank engagement ranges exist between Janus and the Line Of Sight-Forward-Heavy Initial Operational Test; and
    2. the test data were insufficient to support engagement range analysis of the Cavalry Fighting Vehicle or BMP.

    Six of the 50 field test trials were analyzed. Janus consistently generated tank engagement ranges 200 to 1500 meters longer than those observed in the field test. In general, Janus accurately represents the flow of the battle, but engages targets earlier when the range is greater. This suggests that line of sight exists in the model when it does not exist in the field test.

    B. RECOMMENDATIONS

    1. Field Test

    The field test data contained several engagements of unknown range. This trend was apparent in all the trials and especially noticeable with the BMP, Cavalry Fighting Vehicle, and Future Soviet Tank. Improvements in instrumentation are necessary to increase the fraction of engagements that have engagement ranges in the data.


    2. Janus

    Currently, the Defense Mapping Agency (DMA) prepares terrain for combat simulation models down to 12.5 meter resolution. We recommend analysis of engagement ranges in Janus using terrain resolution finer than 50 meters. This may require segmenting the battle, because terrain resolution finer than 50 meters does not cover the entire playing area at Fort Hunter Liggett. If Janus continues to generate longer engagement ranges, the terrain database should be analyzed for agreement with the actual terrain.

    The parameters in the Janus combat systems database should be verified by a team of testers and modelers.

    The model contains a target selection algorithm that permits the firer to continue firing at the same target until the target is destroyed or moves out of line of sight. On the surface, this appears to be a reasonable model of real world engagements. However, in some cases during the simulation, the firer expends a large portion of the basic load on the same target. This continual firing does not occur in the field test. Modelers should analyze this algorithm to see if modifications are appropriate.

    Janus displays great potential as an effective tool in the post-test modeling phase of the Model-Test-Model concept. Additional comparisons should be conducted to enhance both the model and the field test.


    APPENDIX A: FORTRAN CONVERSION PROGRAM

    PROGRAM PLSTRN

    * THE MAIN PROGRAM TAKES THE TEC POSITION LOCATION DATA
    * WHICH IS BY SECOND AND EXTRACTS EVERY MINUTE FOR JANUS
    * USE. JANUS CAN ONLY HANDLE 50 NODES. EACH TRIAL IS
    * ABOUT 50 MINUTES LONG, SO POSITION LOCATION DATA FOR
    * EVERY MINUTE RESULTS IN APPROXIMATELY 50 NODES PER
    * VEHICLE.

    * THERE ARE TWO REQUIREMENTS FOR THIS PROGRAM. ONE, THE
    * POSITION LOCATION DATA FILE FROM THE FIELD TEST. TWO,
    * THE EXACT NUMBER OF VEHICLES RECORDED IN THE DATA.

          INTEGER SS, X, Y, NUMVEH
          CHARACTER PID*4, TIME*5, FILE*9

    COMMON NUMVEH

          PRINT *, 'INPUT FILE (EG L112B.PLS) ? (USE APOSTROPHES)'
          READ *, FILE

          OPEN (UNIT=10, FILE=FILE, STATUS='OLD', RECL=69,


         C CARRIAGECONTROL='LIST')

          PRINT*, 'ENTER TOTAL # OF VEHICLES (INCL AIRCRAFT) IN
         C TRIAL'
          PRINT*, 'TRIAL L099B=63'
          PRINT*, 'TRIAL L100B=62'
          PRINT*, 'TRIAL L112B=67'
          PRINT*, 'TRIAL L122B=66'
          PRINT*, 'TRIAL L123B=61'
          PRINT*, 'TRIAL L125B=67'

          READ*, NUMVEH
          PRINT *, 'PROGRAM CONTINUING...'

          OPEN (UNIT=11, FILE='CHRONORD.DAT', STATUS='NEW'
         C,FORM='FORMATTED')

        5 READ(10,10,END=30) TIME,SS,PID,X,Y
       10 FORMAT(10X,A5,1X,I2,1X,A4,1X,I5,1X,I5)

          IF(SS.EQ.0) WRITE(11,20) TIME,PID,X,Y
       20 FORMAT(1X,A5,':00',2X,A4,2X,I5,2X,I5)
          GO TO 5

       30 CONTINUE
          CLOSE (UNIT=10)
          CLOSE (UNIT=11)


          PRINT*, 'FINISHED CONVERTING PLS'
          PRINT*, 'PROGRAM CONTINUING...'

          CALL FOURFILE

          END

          SUBROUTINE FOURFILE

    C*****THIS SUBROUTINE CREATES FOUR FILES--
    C*****(NTCMOVE999.DAT,NTCROUT999.DAT,NTCPLAY999.DAT,
    C*****NTCKILS999.DAT) WHICH ARE USED BY INITNTC TO RUN JANUS.
    C*****ADDITIONALLY, BADGRID999.DAT CONTAINS ALL THE GRIDS FROM
    C*****THE TRIAL THAT WILL NOT FIT ON A 50X50 JANUS MAP.

          INTEGER LPN, X, Y, NTCTYPE, I, NUMVEH, J
          CHARACTER DATE*9, TIME*8, TECTYPE*2, SIDE*1, PID*3, BOGUS*64
          LOGICAL WRITEPLAY

          COMMON NUMVEH

          OPEN (UNIT=10, FILE='NTCROUT999.DAT', STATUS='NEW')
          OPEN (UNIT=11, FILE='NTCMOVE999.DAT', STATUS='NEW')
          OPEN (UNIT=36, FILE='NTCPLAY999.DAT', STATUS='NEW')
          OPEN (UNIT=37, FILE='NTCKILS999.DAT', STATUS='NEW')


    C*****NTCKILS999.DAT IS A FILE WITH HEADINGS ONLY. IT IS NOT
    C*****NECESSARY TO RUN JANUS, BUT INITNTC LOOKS FOR THE FILE
    C*****AND WILL TERMINATE WITHOUT IT.

          OPEN (UNIT=14, FILE='BADGRID999.DAT', STATUS='NEW')

          WRITE(14,*) 'THESE GRIDS HAVE BEEN DELETED SINCE'
          WRITE(14,*) 'THEY DO NOT FIT ON A 50X50 JANUS MAP...'
          WRITE(14,*)

    C*****NTCMOVE999.DAT USES CHRONORD.DAT

          OPEN (UNIT=13, FILE='CHRONORD.DAT', STATUS='OLD')
          WRITE(10,*)
          WRITE(11,*)
          WRITE(36,*)
          WRITE(37,*)
          WRITE(10,*) 'routs-all table'
          WRITE(11,*) 'move-all table'
          WRITE(36,*) 'pdscr table'
          WRITE(37,*) 'kills-all table'
          WRITE(10,*)
          WRITE(11,*)
          WRITE(36,*)
          WRITE(37,*)


    WRITE(107)

    WRITE (10, 11)1FRXT(':in',6,:pn'2,'udt2X:i),XS

    1CFRMT (eI,:Xtime 14X,' :y',2X,sde2X') XClITye(1,2)IxXl yX11WRITE (11,2)

    2 FRMT('------------------------

    C,: -------:----------WRITE(36, 33)WRITE (37,34)

       33 FORMAT(':lpn',3X,':pid',3X,':side',2X,':org',17X,':ptype
         C ')

       34 FORMAT(':tlpn',1X,':tpid',1X,':side',1X,':result',
         C ':time',16X,':tx',3X,':ty',3X,':flpn',2X,':fpid'
         C,2X,':fwpn',2X,':fx',3X,':fy',3X,':fy',3X,':frat:')

          WRITE(36,*)
          WRITE(37,*)

    C*****DUMMY DATE
          DATE='12 APR 90'

    C*****CREATING NTCMOVE999.DAT

          LPN=0
        7 LPN=LPN+1


          IF(LPN.EQ.NUMVEH+1) LPN=1
          READ(13,20,END=40) TIME,TECTYPE,PID,X,Y

       10 CONTINUE
          IF(TECTYPE.EQ.'AH') THEN
             NTCTYPE=22
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'BR') THEN
             NTCTYPE=29
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'FT') THEN
             NTCTYPE=1
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'W') THEN
             NTCTYPE=10
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'OH') THEN
             NTCTYPE=26
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'CC') THEN
             NTCTYPE=14
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'AG') THEN
             NTCTYPE=25
             SIDE='B'
          ELSEIF(TECTYPE.EQ.'DX') THEN
             NTCTYPE=3
             SIDE='O'
          ELSEIF(TECTYPE.EQ.'HP') THEN
             NTCTYPE=27
             SIDE='O'
          ELSEIF(TECTYPE.EQ.'TT') THEN
             NTCTYPE=11
             SIDE='O'
          ELSEIF(TECTYPE.EQ.'TV') THEN
             NTCTYPE=20
             SIDE='O'
          ELSEIF(TECTYPE.EQ.'FF') THEN
             NTCTYPE=25
             SIDE='O'
          ELSEIF(TECTYPE.EQ.'TH') THEN
             NTCTYPE=23
             SIDE='O'
          ELSE
             PRINT*, 'DO NOT HAVE A NTCTYPE MATCH FOR TECTYPE
         C ', TECTYPE
             PRINT*, 'HAVE ASSIGNED IT A NTCTYPE OF 0 (ZERO) AND PUT'
             PRINT*, 'ON THE BLUE SIDE'
             PRINT*, 'PROGRAM CONTINUING......'
             NTCTYPE=0
             SIDE='B'


          ENDIF

          IF((X.GT.50000.AND.X.LT.65000) .AND. (Y.GT.73000
         C .AND.Y.LT.88000)) THEN
             WRITE(11,30) DATE,TIME,LPN,SIDE,PID,NTCTYPE,X,Y
          ELSE
             WRITE(14,20) TIME,TECTYPE,PID,X,Y
          ENDIF

          GO TO 7

       20 FORMAT(1X,A8,2X,A2,A3,1X,I5,2X,I5)
       30 FORMAT(': ',A9,1X,A8,':',I3,
         C,3X,':',A1,5X,':',A3,3X,':',I2

    40 CONTINUE

    CLOSE (UNIT-li)CLOSE (UNIT-13)CLOSE (UNIT-14)

    C*****CREATING NTCROUT999.DAT
          OPEN (UNIT=81, FILE='NTCMOVE999.DAT', STATUS='OLD')


       95 DO 76 J=1,5
             READ(81,82) BOGUS
       82 FORMAT(1X,A64)
       76 CONTINUE

          WRITEPLAY=.TRUE.
       83 READ(81,84,END=66) DATE,TIME,LPN,SIDE,PID,NTCTYPE,X,Y
       84 FORMAT(2X,A9,1X,A8,2X,I3,4X,A1,6X,A3,4X,I2,5X,I5,2X,I5)
          IF(LPN.EQ.I) THEN
             WRITE(10,30) DATE,TIME,LPN,SIDE,PID,NTCTYPE,X,Y
             IF(WRITEPLAY) THEN

    C*****CREATING NTCPLAY999.DAT
                WRITE(36,85) LPN,PID,SIDE,NTCTYPE
                WRITEPLAY=.FALSE.
             ENDIF
          ENDIF
          GO TO 83

       66 CLOSE (UNIT=81)
          OPEN (UNIT=81, FILE='NTCMOVE999.DAT', STATUS='OLD')

       85 FORMAT(': ',I3,3X,': ',A3,3X,': ',A1,5X,':',20X,': ',I2,4X
         C,':')


          IF(I.EQ.NUMVEH+1) GO TO 99
          GO TO 95

       99 CONTINUE
          WRITE(10,2)
          WRITE(37,*)

          RETURN
          END


    APPENDIX B: LIST OF ACRONYMS

    BMP       Soviet Mechanized Infantry Vehicle
    BTL       Battle File from field test data
    CFV       Cavalry Fighting Vehicle, US Army
    FDTE      Force Development Test and Evaluation
    FORTRAN   Formula Translation Computer Language
    FRE       First Range of Engagement
    FST       Future Soviet Tank
    IFT       Inter-Fire Time
    INITNTC   Program to convert NTC battles to Janus format
    IOT       Initial Operational Test
    JAWS      Janus Analyst Workstation
    LOS-F-H   Line Of Sight-Forward-Heavy Air Defense System
    M1A1      US Army Abrams tank
    MTM       Model-Test-Model
    NTC       National Training Center
    NVEOL     Night Vision Electro Optical Laboratory
    PLS       Position Location System
    RMS       Range Measuring System
    TEC       TEXCOM Experimentation Center
    TEXCOM    Testing and Experimentation Command
    TOF       Time Of Flight
    TRAC      Training and Doctrine Analysis Command


    REFERENCES

    1. Hollis, Walter W., "Verification and Validation (V&V) and Accreditation of Models," Department of the Army, Office of the Under Secretary, Washington, D.C., October 1989.

    2. Hollis, Walter W., "Model-Test-Model," Department of the Army, Office of the Under Secretary, Washington, D.C., October 1990.

    3. Bundy, D., and Creen, M., "Generic Model-Test-Model Using High Resolution Combat Models," TRADOC Analysis Command, Monterey, CA, unpublished.

    4. TRADOC Test and Experimentation Command, Experimentation Center, Line of Sight-Forward-Heavy Initial Operational Test (LOS-F-H IOT) Test Report, Fort Ord, CA, September 1990.

    5. United States Army Combat Developments Experimentation Center, Instrumentation Handbook, 2d ed., Fort Ord, CA, July 1985.

    6. Department of the Army, Janus(T) Documentation, TRAC-White Sands Missile Range, NM, June 1986.

    7. TRADOC Analysis Command Report TRAC-RDM-191, Comparison of the Janus(A) Combat Model to National Training Center (NTC) Battle Data, by CPT(P) David A. Dryer, March 1991.

    8. Bowman, M., Integration of the NTC Tactical Database and Janus(T), Master's Thesis, Naval Postgraduate School, Monterey, CA, March 1989.

    9. Pimper, Jeffrey E. and Dobbs, Laura A., Janus Algorithms Document, Lawrence Livermore National Laboratory, Livermore, CA, January 1988.

    10. Statistical Graphics Corporation, STATGRAPHICS Statistical Graphics System, University ed., STSC, Inc., 1988.

    11. Thomson, Norman, APL Programs for the Mathematics Classroom, Springer-Verlag, 1989.

    12. Chambers, John M.; Cleveland, William S.; Kleiner, Beat; and Tukey, Paul A., Graphical Methods for Data Analysis, Wadsworth and Brooks, 1983.


    INITIAL DISTRIBUTION LIST

                                                              No. Copies
    1. Commander                                                   1
       U.S. Army Training and Doctrine Command
       ATTN: ATCD
       Fort Monroe, VA 23651-5000

    2. Deputy Undersecretary of the Army                           1
       for Operations Research
       ATTN: Mr. Hollis
       Room 2E660, The Pentagon
       Washington, D.C. 20310-0102

    3. Director                                                    1
       U.S. Army TRADOC Analysis Command-Ft. Leavenworth
       ATTN: ATRC-FOQ (Technical Information Center)
       Fort Leavenworth, KS 66027-5200

    4. Director                                                    1
       U.S. Army TRADOC Analysis Command-WSMR
       ATTN: ATRC-WSL (Technical Library)
       White Sands Missile Range, NM 88002-5502

    5. Chief                                                       1
       U.S. TRADOC Analysis Command-Monterey
       ATTN: CPT Dennis Bundy, USA
       P.O. Box 8692
       Monterey, CA 93943-0692

    6. Library, Code 52                                            2
       Naval Postgraduate School
       Monterey, CA 93943-5002

    7. Director                                                    1
       ATTN: ATZL-TAL
       Ft. Leavenworth, KS 66027-7000

    8. Defense Technical Information Center                        2
       ATTN: DTIC, TCA
       Cameron Station
       Alexandria, VA 22304-6145


    9. U.S. Army Library
       Army Study Documentation and Information Retrieval System
       ANRAL-RS
       ATTN: ASDIRS
       Room 1A518, The Pentagon
       Washington, D.C. 20310

    10. Professor Donald Barr, Code MA/Ba                          1
        Department of Mathematics
        Naval Postgraduate School
        Monterey, CA 93943-5000

    11. Department of the Army                                     2
        HQ PERSCOM (TAPC-PLO)
        ATTN: CPT Allen C. East
        200 Stovall Street
        Alexandria, VA 22332-0406

    12. Director
        TEXCOM Experimentation Center
        Fort Hunter Liggett, CA 93928