CHAPTER III

SPECIFICATION OF MODEL PARAMETERS


To create a quantitative model based on the ethnohistoric evidence, we must identify the essential parameters that govern the observed decline in soil fertility over time. Ultimately the argument must be recast in mathematical terms. If we start with a population of size p which requires n units of maize per person per unit time, then we estimate that (np)/y units of land will be required, where y is the average yield potential per unit area. Our problem is to predict these values at some time, t, recognizing that p and y (and later, n) are changing functions of time, expressed as P(t) and Y(t), respectively.

Assuming, for the moment, that per capita maize requirements remain essentially constant, the land requirement, l, at time t is:

l = nP(t)/Y(t)       (3.1)
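Equation 3.1 can be expressed as a short computational sketch. The numerical values below are placeholders chosen purely for illustration, not estimates defended in this chapter.

```python
def land_required(n, population, yield_per_area):
    """Equation 3.1: l = n * P(t) / Y(t).

    n              -- maize required per person per unit time
    population     -- P(t), population size at time t
    yield_per_area -- Y(t), average yield per unit area at time t
    """
    return n * population / yield_per_area

# Placeholder values: 200 people, 2 quintals per person per year,
# and a yield of 10 quintals/ha.
print(land_required(2.0, 200, 10.0))  # 40.0 hectares
```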

Population, under normal conditions, will be expected to be nondecreasing. Because we know from observation that soil used under aboriginal conditions becomes depleted, we can infer that Y(t) is a decreasing function of time. It follows that l can be defined as a nondecreasing function, L(t). If L(t) reaches some threshold value related to the maximum amount of land available, or the maximum amount of land that can be worked under technological and manpower constraints, the society must make adjustments to reduce the subsistence stress. Because their technological system precluded soil improvement practices, aboriginal options were limited to population controls such as migration (fissioning of villages), adjustments in biological response (e.g. reduction in body size), or decreasing the intrinsic rate of growth (such as infanticide and war). The purpose of modeling aboriginal agriculture is to define when such stresses would occur so that we can anticipate the impact on the archaeological record.

Recreating the dynamics of Mississippian agriculture means being able to approximate the form of P(t) and Y(t) as well as estimate the value of n and the threshold values of L(t). Our estimates need only be accurate to within the same order of magnitude as the actual, but unobservable, values during the Mississippian. For archaeological applications such accuracy is reasonable and sufficient. This chapter provides arguments for the acceptance of certain functions and parameter values which answer these needs.

This involves defining population dynamics, yield potentials, yield reduction, and per capita consumption as physical processes bounded by a range of potential values. The structure of Mississippian agriculture is composed of a set of production options and limitations. By understanding the interactions, we can construct a viable model that can be applied to a specific archaeological context (Chapter IV).

Population Dynamics

The growth rate function is modeled in terms of some range of initial population densities over the entire Mississippian Period. To do this, population parameters must be developed to reflect prehistoric conditions as revealed by paleodemographies (and constrained by the inherent limitations of such data).

The approach of paleodemographic studies primarily incorporates life table methods, assuming a stationary population (i.e. no significant increase in size) as presented by Acsadi and Nemeskeri (1970) and Ubelaker (1974). Although such a model provides some useful information (Hall 1978), I do not believe it is acceptable to assume stationary conditions existed during the entire span of the Mississippian Period simply to facilitate the use of life table data. Howell (1973) and Weiss (1973,1975) argue that we should use models developed from large populations over time in the study of prehistoric demographics. The established Coale-Demeny models (Coale et al. 1983) serve this purpose.

Their models were derived from 326 male and female life tables taken from populations recorded over the last century or more. They include samples from Africa (15), North America (18), Latin America (33), Asia (32), Europe (206), and Oceania (22). The results (see Coale et al. 1983:1-36 for the methodology) were categorized into four families (North, South, East, and West) of tables each with 25 mortality levels for each sex. Given certain empirical data, such as estimates of life expectancy and the gross reproductive rate for females, models can be selected from which the underlying parameters of the population can be derived. The advantage of this approach lies in its allowance for interpolation between "real" data instead of relying on the extrapolation of unobserved information from incomplete archaeological material (see Angel 1969).


Instead of assuming a stationary condition, one can recognize the underlying stability which all populations tend to reflect and use it to generate an alternative representation. Such stable populations display a constant rate of increase derived from a prolonged "prevalence of an unchanging fertility schedule" (Coale et al. 1983:7). The original concept of stability was defined by Lotka (1907) as a population whose age distribution was:

c(x) = bP(x)e^(-rx)

where: c(x) is the proportion of individuals at age x
       b is the birth rate
       r is the annual rate of increase
       P(x) is the proportion surviving to age x.

Every life table tabulation and its computed r value implies a "determinate age composition, with an associated birth rate and death rate" (Coale et al. 1983:7). Stability is defined in terms of this unchanging age distribution.
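Lotka's definition can be illustrated with a toy survivorship schedule (invented here solely for demonstration). Given P(x) and r, the birth rate b is fixed by the requirement that the proportions c(x) sum to one.

```python
import math

def stable_age_distribution(survivorship, r):
    """Lotka's stable population: c(x) = b * P(x) * e^(-rx).

    survivorship -- list of P(x), proportion surviving to age x
    r            -- annual rate of increase
    The birth rate b follows from the normalization sum(c) = 1.
    """
    weights = [p * math.exp(-r * x) for x, p in enumerate(survivorship)]
    b = 1.0 / sum(weights)
    return b, [b * w for w in weights]

# Invented survivorship schedule with heavy early mortality.
P = [1.0, 0.5, 0.4, 0.3, 0.1]
b, c = stable_age_distribution(P, r=0.01)
print(round(sum(c), 6))  # 1.0 -- the proportions sum to one
```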

The selection of appropriate mortality schedules for each sex requires that we make some basic assumptions about the characteristics of late prehistoric demographics. Generalized knowledge (Hassan 1975: 43; Angel 1975) of early population dynamics would indicate that we should expect:

  1. a high infant mortality rate, probably around 50%,
  2. a life expectancy at birth of 20 years,
  3. a low mean age of the maternity schedule (m), probably around 23 years, and
  4. a large female gross reproductive rate for m [GRR(m)] in the range of 4.0 to 5.0 female children.


Given a high infant mortality rate and a low life expectancy, West Level 1 mortality models are the most suitable. Some of the basic population statistics adjusted for a GRR(23) range of 2.0 to 4.5 for these models are presented in Tables 1 and 2 (higher GRR values cannot be interpolated from the original tables when m is as low as 23 years). The Net Reproductive Rate (NRR) represents the average number of daughters born to each woman that survive to adulthood. The Dependency Ratio is the ratio of individuals of each sex under 15 or over 60 years of age to the rest of the population. The Generation Length is calculated following Pollard (1973:35) for those situations where the growth rate is greater than zero. It represents the number of years required for the annual number of births to increase by a factor of NRR. The Average Family Size is the mean number of male and female offspring that must be born to each adult woman if her cohort is to be replaced prior to reaching menopause (Weiss 1973:39).
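Pollard's generation length follows directly from the relation NRR = e^(rT). As a check, the NRR and growth rate pairs published in Table 1 reproduce the tabulated values.

```python
import math

def generation_length(nrr, r):
    """Years T over which annual births grow by a factor of NRR,
    from the stable-population relation NRR = e^(rT)."""
    return math.log(nrr) / r

# NRR and growth rate pairs from Table 1, GRR(23) = 3.0, 4.0, 4.5:
for nrr, r in [(1.097, 0.003327), (1.586, 0.016690), (1.845, 0.022205)]:
    print(round(generation_length(nrr, r), 1))  # 27.8, 27.6, 27.6
```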

Life expectancy at birth for the female tables is 20.0 years. For the males it is 17.4 years. The growth rates range from -0.014933 when GRR(23) equals 2.0 to 0.022205 for a GRR(23) of 4.5. Hassan's (1975:42) estimate of 0.007 to 0.017 maximum potential annual growth for hunter/gatherers suggests the validity of using the GRR(23) value of 4.0 (r = 0.016690). However, this implies an average gross family size of over eight children, well beyond Angel's (1975:183) 4.7 estimate used by Hassan (1975:43). This would mean that either Angel's (1969:432) method of using pubic changes to estimate the average number of births per woman is invalid or these stable models are inappropriate for our use. In the absence of more reasonable data, we will assume that our population parameters based on the West Level 1 model are usable. Based on all the data, we will adopt a range of growth between 0.003 and 0.017 corresponding to GRR(23) values of 3.0 and 4.0, respectively. Such rates should be appropriate for Mississippian development beginning around A.D. 900. The average family size, allowing for infant mortality, would probably lie between four and five individuals at any one time.

Table 1. West Level 1 female statistics.

                         GRR(23.0)
Parameter             2.0        2.8        3.0        4.0        4.5

Birth rate          0.032801   0.050416   0.054341   0.073211   0.081591
Death rate          0.047734   0.050416   0.051014   0.056521   0.059386
Growth rate        -0.014933   0.0        0.003327   0.016690   0.022205
NRR                 0.656      1.017      1.097      1.586      1.845
Average age        31.126     25.589     24.355     20.249     18.788
Percent 15-44      46.793     46.050     45.887     42.715     40.983
Dependency ratio    0.560      0.719      0.754      0.992      1.117
DR over age 1       0.036606   0.033249   0.032501   0.031520   0.031512
DR over age 5       0.033211   0.027541   0.026278   0.022814   0.021714
Avg age at dth     30.605     20.338     18.050     11.793      9.947
Avg age dth >5     48.137     41.586     40.127     34.449     32.290
Generation length      -          -      27.8       27.6       27.6
Avg family size     3.4        5.2        5.6        8.1        9.4

NRR = net reproductive rate; DR = death rate; Avg = average; dth = death

Table 2. West Level 1 male statistics.

                         GRR(23.0)
Parameter             2.0        2.8        3.0        4.0        4.5

Birth rate          0.037075   0.055883   0.060073   0.080109   0.089022
Death rate          0.052009   0.055883   0.056746   0.063429   0.066819
Growth rate        -0.014933   0.0        0.003327   0.016690   0.022205
Average age        29.597     24.550     23.425     19.661     18.315
Percent 15-44      49.263     47.660     47.303     43.633     41.753
Dependency ratio    0.533      0.705      0.743      0.985      1.110
DR over age 1      37.381     33.780     32.978     31.678     31.530
DR over age 5      33.965     28.108     26.803     23.062     21.839
Avg age at dth     27.645     18.358     16.289     10.666      9.000
Avg age dth >5     46.488     40.828     39.567     34.622     32.693

DR = death rate; Avg = average; dth = death

To estimate future population size at some time, t, we will use:

Pt = P0e^(rt)       (3.2)

where P0 is the initial population size, r is the intrinsic growth rate, and t is time in years (Weiss 1973:73). As populations cannot grow exponentially indefinitely, anticipated adjustments to the population size can be modeled in a number of ways. The growth rate could be seen as a function inversely related to some static maximum population size.

rt = r(1 - Pt/K)     where: K = maximum population size       (3.3)

Hassan (1978:70) discusses such a damped growth function. However, such a logistic model of growth has not been shown to reflect human response to this kind of stress (Pollard 1973:23; cf. Harpending and Bertram 1975).
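The contrast between unconstrained growth (equation 3.2) and a damped, logistic-style adjustment can be sketched as follows. The initial population, growth rate, and ceiling are placeholders; only the qualitative contrast matters here.

```python
import math

def exponential(p0, r, t):
    """Equation 3.2: unconstrained exponential growth."""
    return p0 * math.exp(r * t)

def logistic_step(p, r, k):
    """One year of damped growth: the effective rate shrinks
    toward zero as P approaches the ceiling K."""
    return p + r * p * (1 - p / k)

# Placeholder values: 100 people, the upper growth rate adopted
# above (0.017), and an arbitrary ceiling of 1000.
p = 100.0
for _ in range(300):
    p = logistic_step(p, r=0.017, k=1000.0)

print(round(exponential(100.0, 0.017, 300)))  # about 16402, unconstrained
print(round(p))                               # damped value, pinned near K
```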

Instead of adjusting the value of r in this manner, we could recognize that stable populations maintain relatively consistent growth patterns until such time as they exceed some viable population size. This maximum size can be related to the abstract notion of fluctuating carrying capacity. Although it is an extremely difficult task to calculate this value (see Brush 1975 and Street 1969), we can accept its existence as an index of a culture's capacity to exist under specific environmental, technological, and societal limits. For agricultural systems such as those of interest here, this will involve relating productive needs against productive capabilities. When the limits are reached, however they are measured, society will restructure the conditions as best it can to reduce the associated stress. In the absence of technological shifts and social adjustments, this can involve adopting instantaneous population reduction measures, such as infanticide and war, or encouraging migration to reduce the population density in one area. We will calculate the frequency of such stresses as they relate to agricultural dependence and examine the consequences of proposed Mississippian solutions to the problem.
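The timing of such stress can be sketched by combining the components already defined: exponential growth in P(t), a declining Y(t), and a fixed land ceiling. The exponential form of the yield decline and every numerical value below are placeholders for illustration, not results of the model developed later.

```python
import math

def years_until_stress(p0, r, n, y0, decay, land_max, horizon=500):
    """First year t at which L(t) = n*P(t)/Y(t) exceeds land_max.

    Assumes P(t) = p0*e^(rt) and, purely for illustration, an
    exponential yield decline Y(t) = y0*e^(-decay*t). Returns None
    if the threshold is not crossed within the horizon.
    """
    for t in range(horizon + 1):
        population = p0 * math.exp(r * t)
        yield_t = y0 * math.exp(-decay * t)
        if n * population / yield_t > land_max:
            return t
    return None

# Placeholder values: 200 people growing at r = 0.017, consuming
# 2 quintals per person per year, yields starting at 15 quintals/ha
# and declining 2% per year, with 100 ha of workable land.
print(years_until_stress(200, 0.017, 2.0, 15.0, 0.02, 100.0))  # 36
```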

Estimating Yield Potential

Anthropological and botanical research into the development and use of early forms of Zea mays L. has almost exclusively been directed towards describing their origins and evolution rather than their productive capabilities (Galinat 1977; MacNeish 1964; Mangelsdorf 1974; Mangelsdorf et al. 1964,1967). This study requires specific botanical data on yield potentials and cultivation requirements for pre-contact varieties given the farming technology of the period. Pending publication of such information (e.g. Cutler and Blake n.d. cited in Brown and Goodman 1977:75), we must investigate the possibility that documented pre-hybrid varieties were sufficiently similar to their earlier ancestors to justify extracting physiological data from historical records. This is reasonable given the antiquity of the various forms of maize and the slow rate of genetic change prior to intensive and systematic hybridization in the early 1920s. With such information we can evaluate the effects of early farming practices on long term productivity, Y(t).

To bracket yields per unit area over time, we must separate genotypic variability from environmental effects. As a result of microadaptations to specific conditions, each variety of maize has a certain maximum yield potential that can be achieved under optimal conditions. Such a value is independent of geographical locale and serves as a baseline about which we can judge the cultural and environmental influences on yield. Fortunately, maize adaptability to specific conditions allows us to ignore geographical influences when considering yield as a function of time. Maturation periods will vary with latitude but production potential under single cropping will be essentially constant within each variety. This section will examine the varietal influences on yield and attempt to estimate the prehistoric yields of early forms of maize.

Races of Maize

Anderson and Cutler (1942) developed the concept of maize "races" to classify forms that shared "enough" characteristics to make them recognizable as a single group. Brown and Goodman (1977:49-52) have outlined the development of the race concept noting the extensive efforts made to isolate racially important characters. Early emphasis was placed on obvious tassel, ear, and kernel variability. Today, chromosome knobs are used to help delineate specific movements of racial groups between geographic areas. Unfortunately only a few of these attributes are observable in an archaeological setting, resulting in a limited level of discriminatory resolution.

Archaeological samples from the eastern Woodlands tend to be placed into one of two general groups: Basketmaker or Eastern Complex (Yarnell 1964:107-120). The Basketmaker race is derived from the Southwest and represents the earliest (Middle Woodland) recognized "race" of maize in the East. Its form persists into the Mississippian period at Cahokia, west central Illinois. Basketmaker cobs have an elliptical cross section, small shanks, and tapered ends. The number of rows is usually 12 or 14 (here denoted 12/14) (Yarnell 1964:111-112). The Eastern Complex (sometimes just called Eastern) is characterized by 8/10/12 row ears with observable row pairing that produces square, pentagonal, or hexagonal cross sections. The kernels are usually crescent shaped with a height generally less than their width. These flint and flour forms appear in the East as secondary elements among Basketmaker-like samples dated as early as A.D. 1000 (Yarnell 1964:107).

Most research (Goodman and Bird 1977a,1977b) into early maize development concentrates on Latin American complexes largely because only nine of the recognized 169 races of modern maize are found outside the region (Galinat 1977:19; Brown and Goodman 1977:72-73). In their non-archaeological survey of maize races, Brown and Goodman (1977:72) acknowledge that very little is known about the extent of pre-hybrid varieties in the United States. Ignoring pop and sweet forms of maize, the modern races found north of the Rio Grande are:

  1. Northern Flints,
  2. Great Plains Flints and Flours,
  3. Pima-Papago,
  4. Southwestern Semidents,
  5. Southwestern 12 Row,
  6. Southern Dents,
  7. Derived Southern Dents,
  8. Southeastern Flints, and
  9. Corn Belt Dents (Brown and Goodman 1977:72-73).

Based upon Brown and Goodman's (1977:73-79) overview, it appears that, on the basis of origin and antiquity, the races of Northern Flints, Great Plains Flints and Flours, and Southern Dents encompass the contact varieties relevant to this study. Additionally, the Pima-Papago race may be important due to its hypothesized relationship to the earliest forms (Basketmaker) found in the East (Brown and Goodman 1977:75; Yarnell 1964:111).

Most of the maize grown north of Georgia and east of the Mississippi River during the pre-Colonial period can be classified as Northern Flint. Its origin has been traced to either the Harinoso de Ocho of Mexico or the San Marceno and Serrano races of highland Guatemala. The many races found in the Great Plains often appear to be the product of crosses between Southwestern varieties and Northern Flints. Such mixing has been observed in ethnobotanical remains from fifteenth century sites in South Dakota. The complex of floury Southern Dents was extensively grown during the Colonial period as far north as Maryland. Many varieties seem to be northern adaptations of the Tuxpeno, Pepitilla, Tabloncillo, and Olotillo races of central Mexico (Brown and Anderson 1947,1948; Brown and Goodman 1977:73-77).

The relatively short stalked Northern Flints characteristically produce two 8/10 row ears per plant. Like their eastern relatives, the more variable Plains races tend to display 8/10 rows although 12/14 rows are not uncommon. The plants also tend to be shorter than the Northern Flints. The Southern Dents produce the tallest plants of all the races found in the United States. The number of rows per ear ranges from 8/10 (var. Hickory King) to 24/26 (var. Gourdseed) (Brown and Anderson 1947,1948; Brown and Goodman 1977:73-77).

At what time, then, do these gross morphological types of maize first appear prehistorically? Although it is difficult to generate a one-to-one correspondence between archaeological and modern samples, ethnobotanical research confirms a relatively great antiquity for many of the modern races. Some forms (Tuxpeno and Olotillo) linked to Southern Dents have been dated to the seventh century A.D. (Mangelsdorf et al. 1967:197), making them potentially available to spread northward into the Southeast by Mississippian times. A form of the Southwestern Pima-Papago race is considered to have been transported to the upper Mississippi Valley by A.D. 100 and to the coast of Georgia even earlier (Brown and Goodman 1977:75). The assumed Northern Flint ancestor, Maiz de Ocho, is recognized in the Southwest as early as A.D. 700 (Mangelsdorf 1974:113). As already noted, 8/10 row flint varieties begin to appear archaeologically in the East around A.D. 1000 (Brown and Anderson 1947:10).


Over the last 7000 years, the only fundamental change in the botanical characteristics of maize has been an increase in cob and kernel size (Mangelsdorf et al. 1967:200). Mangelsdorf notes:

. . . it is true that in the hands of Indian cultivators maize had reached a high state of development when America was discovered. All of the principle commercial types of corn recognized today: dent, flint, flour, pop, and sweet, were already in existence when the white man appeared on the scene and, until hybrid corn was developed, the modern corn breeder, for all his rigorous selection, had made little progress in improving the productiveness over the better Indian varieties [1974:207].

Therefore, it is highly probable that most of the Mississippian varieties were similar enough to the flint and flour types grown by the early settlers that their production characteristics can be extrapolated from pre-hybrid historical records as well as data from hypothesized parental components in the Southwest, Mexico, and Guatemala.

Yield as a Function of Race

Approximating the yield of various races of prehistoric maize is difficult from two perspectives. First, our concept of archaeologically recognizable races does not carry with it quantitative values related to productivity. At best we can infer certain value ranges based on supposed connections with extant races, if these links are not too far removed temporally. For example, Yarnell's Eastern varieties consist of both flint and flour forms whose descendants are the modern, but no longer commercially viable, Northern Flints. To estimate the yield potential of these Northern Flints we could either conduct experiments with existing seed stored at various seed banks or examine the production records of the early nineteenth century. The first approach would be useful but beyond the scope of this project. Using historical records provides an upper bound for late contact maize production but does not give us specific information about earlier forms, such as the Basketmaker complex. This brings us to the second obstacle.

Our knowledge of the earliest forms of Mississippian maize is limited to fragmentary morphological characteristics of kernels and cobs. Although useful in terms of developing a classification system (see Nickerson 1953), they provide little insight into actual yields. We are left with the dilemma of not being able to directly measure the changes in crop yields related to varietal improvements. We must develop some method of accounting for such improvements, given the observed morphological trend of decreasing row number (Cutler 1956) and increasing cob size through time.

One way to approach this problem is to isolate the relative characteristics of the earliest forms and compare them to the later races. Using Nickerson's (1953) data we can begin to construct such a measure. Of the samples he documents, Iroquois Sacred Flour, Northern Flint, and Basketmaker are the most relevant to this study. Table 3 lists the average values (Nickerson 1953:88) of the standard morphological characteristics of each racial group. The Iroquois Sacred Flour sample represents one form of the latest Northern Flint descendent. It is described as having ear lengths of 20 to 28 cm. The Northern Flint collection is a composite grouping of both Northeastern and upper Great Plains varieties. No range for cob length is provided. The Basketmaker cobs were taken from sites in northern Arizona. Their length ranged between 6 and 12 cm.


Table 3. The average values of four morphological characteristics of maize cobs.

Sample                  Cupule Width (mm)  Shank Diameter (mm)  Kernel Thickness (mm)  Glume Width (mm)

Iroquois Sacred Flour         11.5                 22                   3.7                  8.7
Northern Flint                 9.5                 16                   4.4                  7.0
Basketmaker                    6.0                  9                   4.0                  4.4

Although kernel thickness remains fairly constant, all other characters increase by a factor of two from earliest to latest. We can, therefore, expect a twofold increase in potential productive capability from the earliest introduction to the colonial period. This increase is solely defined on the basis of racial differences and not on improvements within a variety. Such varietal enhancements would only become quantitatively important when two distinctively different races are crossed. For example, the ancestors of modern Corn Belt Dents were formed by crossing Northern Flints with Southern Dents (Brown and Anderson 1947,1948). The earliest date for such mixing is probably around A.D. 1840, because it was at this time that we find evidence of 40 maize varieties of various racial origins (Bowman and Crossley 1911:3). This represents an eightfold increase in variability over the five varieties (four flints and one dent) known to have existed in 1814 (Bowman and Crossley 1911:3).
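The twofold figure can be checked directly against the Table 3 averages.

```python
# Average cob characters from Table 3 (all in mm).
basketmaker = {"cupule width": 6.0, "shank diameter": 9.0,
               "kernel thickness": 4.0, "glume width": 4.4}
iroquois = {"cupule width": 11.5, "shank diameter": 22.0,
            "kernel thickness": 3.7, "glume width": 8.7}

for trait in basketmaker:
    ratio = iroquois[trait] / basketmaker[trait]
    print(trait, round(ratio, 2))
# Cupule width, shank diameter, and glume width roughly double
# (ratios of about 1.9 to 2.4); kernel thickness alone does not.
```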

There is no indication archaeologically or ethnohistorically that significant crossbreeding between races occurred at such a level during the late prehistoric period. Further, recent carbon isotope research on skeletal remains (Lynott et al. 1986) suggests that at least some Emergent Mississippian (A.D. 900 to 1000) cultures did not rely heavily on maize. During this period we would expect to find the Basketmaker varieties being grown in garden situations. Heavier dependence on maize would seem to be correlated with the acceptance of the more productive flint varieties ca. A.D. 1000 to 1200.

The earliest introduction of dents into the Southeast is unclear. Brown and Anderson (1948:256) note that Beverly's history of Virginia discusses dented corn in 1705. They divide the dents into Old and Derived which separates Mexican-like dented types from colonial flint-flour-dent crosses. Two of the most common Old varieties are Hickory King and Gourdseed/Shoepeg (Brown and Anderson 1948:263-264). Hickory King appears to be one of the oldest dents with several flint-like characteristics such as 8/10 rows, row pairing, narrow cylindrical ears, and wide kernels. Gourdseed and Shoepeg are more typical of the dents with longer kernels, prominent denting, and large row numbers (greater than 16). Despite their assumed late prehistoric arrival into the eastern United States, the lack of supporting archaeological data would indicate that they were either not heavily used until the colonial period or were not yet recognizably different (like var. Hickory King) from the various flint varieties. Given either the absence or similarity argument, our model can deal strictly with the productive capability of the Northern Flint form for most of the Mississippian Period.


What can we expect the yield per unit area to have been for these varieties? Yield is the product of environmental, physiological, and technological influences. For the moment we will ignore the first two factors and concentrate on the technological or behavioral effects of aboriginal agriculture. Given a particular variety of maize, its yield will be dependent on the density of plants per unit area. Modern Corn Belt varieties produce a maximum yield when plant densities range from 40,000 to over 100,000 plants per hectare (pph) (Larson and Hanway 1977:645). Row widths of 40 to 100 cm, with corresponding plant spacings of 62 to 10 cm, would be needed to achieve such densities for modern drilled corn. The success of different plant densities will largely be dependent on soil conditions, climate, and the ability to optimize the leaf area index relating upper leaf area to ground surface (Bowman and Crossley 1911:172-174; Larson and Hanway 1977:645).

Based on the ethnohistoric evidence, aboriginal planting consisted of widely spaced (three to six feet) hills with several plants per hill. Such a practice is very similar to that used in the Corn Belt prior to the self-propelled mechanization of agriculture in the mid-twentieth century. The traditional checked planting rate was set at three seeds per hill with 3.5 ft (1.07 m) between hills and rows (Bowman and Crossley 1911:102). This would result in 3556 hills per acre (hpa) (8788 hills per hectare [hph]) and up to 10,668 plants per acre (ppa) (26,364 pph). Four or five plants per hill were possible without a loss in yield if the climatic and soil conditions were optimal. On poorer soil, it was recommended that fewer seeds be planted per hill and the hills be spaced up to six feet apart, resulting in more than a 66% reduction in plant density and yield.
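The checked-planting figures can be reproduced from the spacing alone.

```python
SQFT_PER_ACRE = 43560.0
ACRES_PER_HECTARE = 2.471

def hills_per_acre(spacing_ft):
    """Hills per acre under checked planting with equal spacing
    between hills and between rows."""
    return SQFT_PER_ACRE / spacing_ft ** 2

hpa = hills_per_acre(3.5)              # the traditional 3.5 ft check
print(round(hpa))                      # 3556 hills per acre
print(round(hpa * ACRES_PER_HECTARE))  # hills per hectare (text: 8788)
print(round(hpa * 3))                  # 10668 plants per acre at 3 seeds/hill
```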

The average pph for four Corn Belt states (Indiana, Iowa, Illinois, and Minnesota) in 1973 was 47,400 (Larson and Hanway 1977:645). Thus, traditional densities were slightly more than half those of today. The average yields in the United States for the years 1850, 1910, 1945, and 1973 were 16.0 quintals/ha (25.5 bu/acre), 21.0 quintals/ha (33.5 bu/acre), 17.8 quintals/ha (28.4 bu/acre), and 58.9 quintals/ha (93.8 bu/acre), respectively (Bowman and Crossley 1911:14; Larson and Hanway 1977:625). (Appendix A contains definitions of the units of measure used in this study.) The 1880 average yield in Tennessee was 21.6 bu/acre (13.6 quintals/ha) (Hawkins 1882:64). Clearly, pre-hybrid yields were relatively consistent and slightly greater than the estimates of 18 bu/acre (11.3 quintals/ha) made by Rutman (1967:43) for the Plymouth farmers and the calculated 11.4 bu/acre (7.2 quintals/ha) average for 1835 Eastern Cherokee Reservees (Bureau of Indian Affairs 1835). Modern varieties are at least twice as productive as the best of the Northern Flints, and plant densities are similarly twice those of aboriginal systems. We should, therefore, expect the average upper productive limit of aboriginal agriculture, on the best soils and under optimal planting patterns, to lie between 11.3 and 18.8 quintals/ha (18 and 30 bu/acre) for Northern Flints and probably a fourth of that for Basketmaker varieties, giving a range in maximum possible yield of 4.7 to 18.8 quintals/ha (7.5 to 30 bu/acre) for the Mississippian Period. Actual or average yields will depend on differing environmental constraints such as soil type, soil condition, and weather. To address the real potential for maize agriculture we must examine the factors that annually limit yields.
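The paired units used throughout this section can be interconverted with a single factor, taking 56 lb per bushel of shelled corn (see Appendix A).

```python
LB_PER_BUSHEL = 56.0     # shelled corn
KG_PER_LB = 0.45359
ACRES_PER_HA = 2.47105

def bu_acre_to_quintals_ha(bu_per_acre):
    """Convert bu/acre to quintals/ha (1 quintal = 100 kg)."""
    return bu_per_acre * LB_PER_BUSHEL * KG_PER_LB * ACRES_PER_HA / 100.0

# The historical averages quoted above:
for bu in (25.5, 33.5, 28.4, 93.8):
    print(round(bu_acre_to_quintals_ha(bu), 1))  # 16.0, 21.0, 17.8, 58.9
```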

Factors Influencing Reduced Yields

So far we have concentrated on the static production potential of specific races of maize available to late prehistoric populations holding all other variables constant. We know from observation that these yields vary as a decreasing function of time. Several external factors, such as weather, pests, disease, and nutritional deficiencies, tend to limit production in various ways. Considerable research has examined the process of soil depletion in terms of Anglo-American farming practices since the Colonial Period (Bonner 1964; Craven 1926; Hall 1905, 1917). Craven (1926) notes that the tobacco plantation system practiced a form of shifting agriculture much the same as that outlined in Chapter II. In the long leaf pine zones of the cotton states, first to third year corn production could drop from 25 to less than 10 bu/acre. Soil in short leaf pine environs could be exhausted in five to seven years. Under oak-hickory conditions soil might produce for up to 12 years. If we could empirically measure the response curves of the most important factors as functions of time, it would be possible to reconstruct agricultural potential under aboriginal conditions.
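If Craven's long leaf pine figures are read as an exponential decline, an assumption made here purely for illustration, the implied per-year decay rate of Y(t) can be computed:

```python
import math

def decay_rate(y_start, y_end, years):
    """Per-year decay constant k for Y(t) = Y0 * e^(-kt)."""
    return math.log(y_start / y_end) / years

# Craven's long leaf pine figures: 25 bu/acre in the first year
# falling to about 10 bu/acre by the third (two intervening years).
k = decay_rate(25.0, 10.0, 2)
print(round(k, 3))  # 0.458 per year
```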

To estimate the rate, dy/dt, of such a process we must examine the external factors that influence yields in conjunction with the effects of aboriginal practices. Nye and Greenland (1960:75) outline six causes for the documented decline in production of shifting agricultural systems:


  1. "Deterioration in the nutrient status of the soil
  2. Deterioration in the physical condition of the soil
  3. Erosion of the top soil
  4. Changes in the numbers and composition of the soil fauna and flora
  5. Increase of weeds
  6. Multiplication of pests and diseases."
All of these are consequences of agricultural practices. Weather and climatic trends can be added as a seventh stochastic factor independent of technology. From a modeling perspective, by examining the causal agents of soil depletion we can better isolate the essential dynamic elements as functions of time.

Soil Depletion

The deterioration of nutrients and the physical condition of the soil, top soil erosion, and loss of soil fauna and flora combine to reduce the viability of the growing medium. Nitrogen (N), phosphorus (P), and potassium (K) are the three critical elements absorbed from the soil by maize (Larson and Hanway 1977:634). Deficiencies in these elements generally lead to a decreased growth rate and stunting. Nitrogen deficiency results in barren ears and stunted kernels. Phosphorus deficiency can minimize successful pollination by delaying silking. Potassium-deficient plants tend to produce small and poorly filled ears as well as weak stalks susceptible to rot (Larson and Hanway 1977:635-636).


Absorption of N is dependent on the processes of oxidation (NO3) and reduction (NH4) largely as a result of nitrifying bacteria (Gardner et al. 1985:110). Because these are biological processes, they are easily affected by temperature, moisture, and soil pH. Nitrification is minimal during the cold, wet months of winter and spring and optimal in well aerated soils when the temperature exceeds 25°C. Denitrification becomes a problem under warm, waterlogged conditions, while nitrate is readily lost through leaching when the soil is well aerated. Late-successional vegetation tends to produce nitrification inhibitors (tannins and phenols) which are slowly removed by leaching during cultivation.

Phosphorus is represented in both organic and inorganic portions of the soil matrix. Most P absorption is dependent on the element being in solution, which accounts for the smallest share of soil P. Although the concentration of soluble P can be extremely low, root action results in plant levels up to 1000 times that of the surrounding soil. Thus, plants can quickly incorporate most of the available P (Gardner et al. 1985:115-116).

Potassium is primarily derived from minerals, especially clay minerals (such as montmorillonite). Although only about 1% to 3% of the total K in soil is available through exchange or solution, most soils are sufficiently buffered to sustain constant levels from year to year. Like N, potassium absorption is optimal at 25°C (Gardner et al. 1985: 117-118).

Rates of nutrient uptake vary according to the growth stage of the plant. Potassium absorption usually is complete by the time of silking, while N and P continue to be incorporated until the plant is almost mature. Through the process of translocation, N and P are largely (66.7% to 75%) concentrated in the grain by harvest time. Potassium, on the other hand, tends (75%) to remain in the leaves and stalk (Larson and Hanway 1977:634-635). Using a standardized yield of 100 quintals/ha (159.3 bu/acre), we can expect 200 kg of N, 36 kg of P, and 190 kg of K to be deposited in the grain and stover (bulk plant remains) of modern corn belt varieties (Larson and Hanway 1977:634-635).

Erosion of exposed top soil is another side effect of agricultural development. Our concern is with its increased rate over that under late-successional vegetation (Nye and Greenland 1960:85). Its effect is influenced by clearing practices, slope, and cultivation techniques. The practice of ridging or hilling the soil produces a "cap" which promotes runoff and impedes oxygen absorption (Nye and Greenland 1960:82). Every doubling of slope will result in 2.5 times the erosion per unit area (Symons 1978:55). Today's mechanized farming practices are optimal on surfaces with a 0.5 to 3.0 degree slope (less than a 5% gradient).

An increase in the proportion of weeds found in fields and the proliferation of pests and diseases are encouraged by monoculture practices. The long term agricultural experiments at Rothamsted, England (Hall 1917:154) demonstrated that continuous cropping of unfertilized fields resulted in an increase in the proportion of weedy plants over grasses and clovers. Such competition from plants adapted to poor soil conditions would further restrict the flow of soil nutrients to cultivated plants.

Maize damage by soil insects (rootworms, cutworms, wireworms, and billbugs) and surface insects (earworms, aphids, borers, grasshoppers, and beetles) can be extensive (Dicke 1977). Non-chemical inhibitors of such pests include deep plowing, short rotations, good drainage, early planting, and clean cultivation. Storage pests, with their capacity to survive southeastern winters, can extend the period of grain destruction beyond the harvest.

Disease-induced losses of 2% to 7% are considered normal today (Ullstrup 1977). Occasional widespread epidemics, however, can destroy significant portions of a crop. These diseases include rots (seed, stalk, ear, and root), blights (leaves), wilts, mildews, smuts, rusts, and viral and mycoplasma diseases, among others. The spread of disease is regulated by temperature, moisture, host resistance, and the form of the agent (fungal, bacterial, or viral). Traditional options to control the outbreak of maize diseases include maintenance of soil fertility (minimizing stalk rots), plant rotation (periodic removal of the host), and field sanitation (removal of host debris) (Bowman and Crossley 1911).

Of the various practices outlined in Chapter II, three directly affect the productive potential of the system:

  1. land was cleared using fire to burn off all plant debris,
  2. cultivation consisted of only two spot hoeings, and
  3. no form of fertilizer was added to the fields.

The use of fire to remove plant debris following the initial clearing and field preparation in subsequent years influences the nutrient balance in both positive and negative ways. On the positive side, the ash, if incorporated into the soil, will serve to raise the pH. This would be beneficial to the growth of nitrogen fixing bacteria. It would also return some non-volatile elements to the soil. However, unless the ash is plowed into the soil most will be washed away during subsequent rainstorms. Fire would also help sanitize the field, minimizing future outbreaks of insect infestations and disease. Negatively, fire releases approximately 96% of the volatile nitrogen and 54% of the potassium stored in plant remains (Arianoutsou and Margaris 1981). Approximately 1.97 kgN/ha is stored in every quintal/ha of produced maize (Gardner et al. 1985:107). The heat from the fire will also destroy nitrogen fixing organisms in the upper layers of the soil. The combined effect of this process and the removal of most of the plant's stored nitrogen (located in the grain) at harvest would result in a decrease in N availability.

Cultivation practices would also affect the maintenance of yield levels. The minimal hoeing schedule would result in a weed cover which would minimize the erosional effects. Competition between plants for water and nutrients would, however, increase. These weeds serve as nutrient sinks capable of holding between 36 and 73 kgN/ha and up to 40 kgK/ha (Arianoutsou and Margaris 1981:345). Burning these plant residues would effectively remove two or more times the nutrients stored in the maize alone. Weed growth would also be conducive to insect propagation. The end result is, as Butzer (1982:148) notes, that spot hoeing lowers yield.

The failure to replenish soil nutrients under the aboriginal system is perhaps the most far-reaching of all the cultural factors. Sustained cropping is a non-renewable process. Harvesting just the grain removes most of the nitrogen and phosphorus absorbed during the growing period. If we also remove or burn the stover and weeds, more than two times the nutrients are taken out of the cycle. Destruction of nitrogen fixing bacteria within the upper soil zones decreases the ability of the field to replenish this deficit. We noted little or no support for either the use of manures or systematic fallowing during the contact period. Corrective measures suitable for extending the productive life of the soil appear to have been extremely limited for prehistoric cultures.

In this regard, the mistaken belief that legumes were used to supply nitrogen to growing maize plants needs to be corrected. These plants have the capability to develop a symbiotic relationship between their root systems and nitrogen fixing bacteria present in the soil. The bacteria concentrate in root nodules and provide nitrogen to their hosts. This frees the beans and clovers from absorbing the nutrient from the soil. Unless the plant is incorporated into the soil while green (as a so-called green manure) the nitrogen will be lost to the harvest and subsequent burning. The fact that the soil's surplus N is not used by these plants allows them to be planted among other N consumers. The aboriginal practice of sowing beans with maize was adaptive because it minimized field size and provided a support (the maize stalk) for the climbing legume. The practice does not provide any nutrient value to the maize plants (Gardner et al. 1985:133; Russell 1973:359). Conversely, it has been shown (Bowman and Crossley 1911:97) that growing cow peas between the rows can reduce corn yields by as much as eight to ten bu/acre.

Similarly, the widespread assumption that the fertility of low river terrace soils is replenished by flooding ignores three points. First, waterlogged conditions encourage denitrification processes. Second, deposited silts, although highly tillable under hoe technology, will not necessarily be nitrogen rich. Finally, spring floods will tend to occur after the maize has been planted, thereby increasing the local risk of crop failure.

What are the essential parameters affecting soil degradation? The foremost is obviously nitrogen depletion. Weed and insect increases, along with disease epidemics, will tend to exacerbate reduced yields at the end of a field's viable life span, largely as a result of nutrient deficiencies. In the absence of systematic rotation, fields would produce until their yield per unit labor became insufficient to support the needs of their cultivators, at which point field abandonment would occur. For modeling purposes, the use-life of a field can therefore be directly related to the nutrient depletion curve. The quantification of this process is developed later.

Climate

The seventh influencing factor, weather, is external to the behavioral options. Varying weather patterns would significantly alter agricultural potential if their long term trends (climatic changes) substantially changed the probability of crop failure. Crop failures become archaeologically significant only when they occur frequently. Therefore our consideration of climatic effects is in terms of the fluctuating probability of crop failure over time.

The two most important climatic variables for maize production are water infiltration and temperature (Symons 1978:21-22). Moisture requirements for maize vary with the growth stage. During the first two months, growth occurs at a slow pace and moisture needs are minimal. Roughly between the middle of June and the middle of July the rate of growth is greatest (±15 days from tasseling). Approximately two weeks before the milky stage is reached, the plant will obtain its maximum weight. From that point on the plant will slowly decrease in weight. The need for moisture is therefore greatest during the 30 days of rapid growth (Azzi 1956:58-59; Larson and Hanway 1977:629).

Temperature plays a vital role in determining the growth rate of maize. At low temperatures above the critical minimum (9°C for maize), growth is slow (Symons 1978:27). The optimal range for growth is 18° to 25°C. A study (Palmer 1973) of seven Mexican races demonstrated that the lower the minimum temperature, the longer it took for plants to flower. Grain weight was shown to increase with earlier plantings given a sufficiently high minimum temperature. As a result, May 21 plantings produced better yields than those planted 30 days earlier or later.

We lack sufficient climatic data to incorporate the effect of the "little Ice Age" on Mississippian development (cf. Parry 1975), yet the potential for increased crop failure probabilities late in the period cannot be dismissed. Although very little climatic data exist for the late Holocene in the Southeast, preliminary attempts (e.g., Hall 1982; Swain 1978) to characterize climatic conditions elsewhere during this period do suggest long term, gross fluctuations in moisture potentials. Knowing that conditions varied substantially in the Southwest, the Southern Plains, and the Northeast would mean that similar climatic influences could have affected Mississippian agriculture in the Southeast. If there was a deterioration below that of the so-called "little optimum" of A.D. 1000-1200 (Butzer 1982:24), it probably occurred after A.D. 1400. Following Parry (1975), this could have resulted in an increased frequency of crop failure from 0.05 in A.D. 1200 to 0.3-0.4 by A.D. 1600.

The following discussion will formulate a functional representation of nutrient depletion in conjunction with stochastic crop failure potentials.

Yield as a Function of Time

Specific maize yields for any given year are dependent on soil quality, climate, and cultivation practices. Cultivation options can be empirically isolated and used to predict yield potentials. Climate is a stochastic variable that may produce trends in crop failure frequencies at different time periods during the Mississippian. We have observed that soil quality, or its ability to maintain a nutrient base suitable for agriculture, decreases over time. To model this decrease as a function of time requires data on the rate at which it occurs. Studies were carried out late in the nineteenth century by various agricultural research organizations that provide data suitable for addressing this question.

The Rothamsted Experiments (Hall 1917) began in 1843 using half acre plots to test the effects of various cultivation practices over long periods of time. Although maize was not included in this English study of farming practices, many of their observations are useful here. First, they demonstrated that continuous cropping does significantly reduce yields over a 20 to 30 year period. They also documented that the characteristics of each plant species differentially affect the rate of this decrease. Barley, with a short root system, depleted soil faster than wheat with its longer roots (Hall 1917:72). (Maize root systems can extend two meters, although most of the system is concentrated near the surface.) Plant rotation and manuring were shown to help prevent the depletion process, especially if legumes (clovers, beans, and peas) were used once every four to seven years (Hall 1917:133). Finally, their data indicate that yields tend to stabilize at some minimum level rather than plunging to zero.

One study of maize and its effect on soil quality involved a three year examination of nitrogen depletion in soils from Kansas, Virginia, and California (Wright 1920) (the Virginia data appear to contain typographical errors for the second year and have not been used in this analysis). Continuous cropping in buckets showed that the ratio of dry plant weights annually decreased along the lines of 1.0:0.54:0.35 for the Kansas soil and 1.0:0.81:0.5 for the soil from California over the three year period. Measurement of nitrate content each year revealed that nitrogen reduction was proportional to the amount of plant growth the previous year.

These examinations qualitatively describe the effects of soil depletion on plant growth. The most useful quantitative data available come from studies of actual field plots where maize was continuously grown under various conditions for several years. Such a study was carried out on a silt loam in Wooster, Ohio between the years 1893 and 1913 (Weir 1926). The purpose of the experiment was to demonstrate the value of a five year rotation of corn, oats, wheat, clover, and timothy. Thirty 0.1 acre plots were planted under various cultivation circumstances. Our interest is in those plots where maize was continuously cropped with (n=6) and without (n=4) fertilizer. If we assume that the yields from the fertilized plots represent the maximum potential yield each year under variable environmental restraints, then the expected yield of the unfertilized plots should maintain, in every year, the same proportion to the fertilized yield that was observed in the first year. That is, the expected yield of the unfertilized plots in year t [Ut] should be:

Ut = U0(Ft/F0)   (3.4)

where Ft is the fertilized yield for year t. The difference between the observed unfertilized yields and the expected value should be approximately equal to the effect of soil depletion on the crop. These data are presented in Table 4. The mean yield over 20 years for the fertilized plots was 37.439 bu/acre (sd = 12.414) (23.5/7.8 quintals/ha) and for the unfertilized test units it was 18.136 bu/acre (sd = 11.431) (11.4/7.2 quintals/ha).

In all but the second year, the unfertilized plots consistently produced below the expected. The cumulative ratio represents the reduction of yield from that expected over a 20 year period. The cumulative ratio of unfertilized to expected yields shows a gradual asymptotic leveling off at just under 60% of the expected accumulated yield.
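The construction of the expected yields and the cumulative ratio can be verified against the first four years of the Wooster data; a minimal sketch in Python (values taken from Table 4):

```python
# Expected unfertilized yields via (3.4), Ut = U0(Ft/F0), and the
# cumulative ratio of observed to expected unfertilized yields,
# using the first four years of the Wooster data (Weir 1926).
fert = [22.16, 37.09, 70.57, 26.23]    # fertilized plots, bu/acre
unfert = [18.41, 32.26, 52.05, 11.91]  # unfertilized plots, bu/acre

expected = [unfert[0] * f / fert[0] for f in fert]
cum_ratio = [sum(unfert[:i + 1]) / sum(expected[:i + 1])
             for i in range(len(fert))]
# expected[1] is about 30.81 and cum_ratio[3] about 0.8842,
# matching the 1895 and 1897 entries of Table 4.
```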

At some point during this period, the decreasing yield would reach a level unsuitable for further exploitation. If, for example, the minimum requirement is 18 bu/acre for a given technology with some maximum allowable field size, then 11 out of 20 years would have been considered failures. The failure rate for the first 13 years was 0.31; for the last seven years it was total. If we apply the same rule to the fertilized plots, even with negative climatic impacts, there would not have been any crop failures. Clearly, soil depletion is the primary limiting factor for sustained land use.

Table 4. Continuous cropping data from Wooster, Ohio (adapted from Weir 1926:32).

Year  Fertilized (bu)  Unfertilized (bu)  Expected (bu)  Cumulative Ratio
1894      22.16            18.41             18.41           1.0000
1895      37.09            32.26             30.81           1.0294
1896      70.57            52.05             58.63           0.9524
1897      26.23            11.91             21.79           0.8842
1898      52.61            30.56             43.71           0.8376
1899      40.32            20.95             33.50           0.8032
1900      48.06            26.38             39.93           0.7802
1901      49.82            26.46             41.39           0.7599
1902      45.40            14.86             37.72           0.7176
1903      32.14             8.02             26.70           0.6860
1904      24.21             4.24             20.11           0.6603
1905      45.57            20.73             37.86           0.6499
1906      41.69            21.59             34.63           0.6479
1907      26.83             6.22             22.29           0.6303
1908      27.97            12.66             23.24           0.6262
1909      30.61            10.23             25.43           0.6152
1910      20.75             6.26             17.24           0.6071
1911      41.00            15.80             34.06           0.5985
1912      37.02            12.60             30.75           0.5888
1913      28.73            10.52             23.87           0.5831

Denitrification is a complex process (Burford et al. 1978), but for our purposes a simple model can be developed that allows us to predict the use-life of fields, if we accept certain basic assumptions. First we must accept the data in Table 4 as representative of the process. We recognize that there are measurement errors inherent in the results reflecting various external impacts (e.g. germination rates, insect damage, disease, etc.) but these are real factors equally pertinent to our study. We also realize that yield is correlated with the previous production of a field. Thus a poor yield in year t could be followed by a proportionally larger than expected yield in year t+1 simply because the demands during year t were minimal (see years 1907 and 1908 in Table 4). However, such an effect will tend to develop after the field has exceeded its practical use-life. Its impact would therefore not be quantitatively significant under an aboriginal system where the field would have been abandoned earlier. If we accept these conditions we can estimate Y(t)'s contribution to (3.1).

We begin by noting that the expected value of the cumulative ratio fits the curve:

ft = 1.066t^(-0.1989)   t > 0   (3.5)

shown in Figure 2. If we simulate expected yields [Ex] dependent on annual weather conditions, we can estimate depleted yields [Y't] as:

Y't = ft(E1 + E2 + ... + Et) - (Y'1 + Y'2 + ... + Y't-1)   t > 0   (3.6)

Thus a stochastically varying yield pattern can be generated reflecting both fluctuating climatic influences and soil degradation.

As an example of the usefulness of (3.6), if the expected yield was a constant 30 bu/acre (no climatic variation) the depleted yield would reach 18 bu/acre after six years (Figure 3). This function can be approximated by:

Yt = 27.78t^(-0.2249) (bu/acre)   t > 0   (3.7)
Yt = 30.0 (bu/acre)   t = 0

Soil depletion lowers the maximum potential yield, exacerbating the effects of climatic perturbations. Restated, if the maximum possible yield is 30 bu/acre then (3.7) represents the maximum possible yield under continuous cropping. Fluctuating external constraints would result in an average yield below this curve.
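The behavior of (3.7) is easy to verify numerically; a minimal sketch using the constants as fitted above:

```python
def depleted_yield(t, y0=30.0):
    """Maximum potential yield (bu/acre) in year t of continuous
    cropping, following (3.7); year 0 is the undepleted maximum."""
    return y0 if t == 0 else 27.78 * t ** -0.2249

# Starting from a constant 30 bu/acre, the depleted maximum falls
# to roughly 18 bu/acre by the sixth year, as stated in the text.
```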

If we accept the fertilized yields of Table 4 as representative of reasonable ranges of yield variability excluding soil depletion influences, it is possible to use these data to estimate average undepleted yields under aboriginal conditions. The highest yield was 70.57 bu/acre in 1896. By reducing each value by a factor of 30.0/70.57 (0.4251) we can transform Table 4 to approximate aboriginal yields. The resultant average yield [Et] would have been 15.92 bu/acre (sd = 5.28) (10.0/3.3 quintals/ha). Therefore, we can be reasonably certain that mean aboriginal yields would have been on the order of 16 bu/acre (10 quintals/ha) under environmental conditions similar to those of the early twentieth century.

Figure 2. Cumulative ratio of depleted to undepleted yields

Figure 3. Depleted yield curve

Soil depletion effects would lower this value each year according to:

Et = (15.92/30.0)Yt (bu/acre) t > 0 (3.8)

with a standard deviation of:

st = (5.28/30.0)Yt (bu/acre) t > 0 (3.9)

This technique can be extended to adjust for different soil types by rescaling the curves to correspond with the specific maximum potential yield.

These equations assume cultivated conditions. The Mississippian practices of allowing fields to become overgrown and burning all residues force us to consider the effect of weed growth on the depletion curve. As noted above, two to three times as much nitrogen may be stored in the weeds and maize compared with that found just in the maize. Conservatively, this is equivalent to doubling the impact of maize production. To approximate this effect, we can adjust (3.8) and (3.9) by substituting 2t for t (or by multiplying each by 0.8557). This will double the rate of depletion and more accurately account for this specific impact.
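The weed adjustment can be expressed directly: substituting 2t for t in the power law is the same as multiplying each value by 2^(-0.2249). A sketch combining (3.8) and (3.9) with this adjustment:

```python
def aboriginal_yield(t):
    """Expected aboriginal yield and its standard deviation (bu/acre)
    in year t, per (3.8) and (3.9) with 2t substituted for t."""
    y = 27.78 * (2 * t) ** -0.2249   # (3.7) under doubled depletion
    return (15.92 / 30.0) * y, (5.28 / 30.0) * y

# Substituting 2t is equivalent to multiplying by 2**-0.2249,
# which is approximately the 0.8557 factor cited above.
```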

Modeling the process of soil recovery under temperate conditions following intensive agricultural use is a more difficult problem. Heidenreich (1971:190) estimated that over 60 years would be required on sandy soils. Sandy loams were expected to replenish themselves after 35 years. Green (1980a:224) expects that a 60 to 85 year period would be required to return an agriculturally disturbed area to a mature secondary stand. His estimate is based on Likens' (1978) research. However, the Rothamsted experiments suggest that "old arable soils" would require between 100 and 150 years to raise the nitrogen level from a nearly depleted 0.11% to a grassland level of 0.25% (Russell 1973:324). Such plots were referred to as "old fields" by the Cherokee and were distinguished by plants adapted to depleted conditions (especially wild strawberries [Adair 1930:439]). Based on these observations, a 125 year recovery period or "fallow" would seem reasonable for temperate forest environments, after which the land could be reused.

Per Capita Consumption

Estimating the average annual consumption per individual, n, involves contrasting the amount of land that can be farmed under finite yield potentials against the amount of maize required. The ethnohistoric accounts indicate the average individual would utilize 0.33 to 1.5 acres and a family would plant from 1.0 to 4.0 acres (0.4 to 1.6 ha) of maize.

The required amount of maize is largely dependent upon its importance in the diet. Most researchers (Bennett 1956; Minnis 1985; Thomas 1976; and others) base their estimates of maize consumption on the average caloric needs of an individual per day. This assumes that no chronic periods of nutritional deprivation occurred and that we know the relative contribution of maize to the diet. Faced with these limitations, it would be more appropriate to use such approximations as upper limits of maize consumption and contrast these values with expected yields from observed field sizes. Bennett (1956:392) and Thomas (1976:14) suggest that roughly 65% of the eastern aboriginal diet consisted of maize and that the average individual required 2500 calories/day. This relative importance figure is well within the upper bounds of Lynott et al.'s (1986:61) Mississippian estimate of a 35% to 72% cline after A.D. 1000 based on isotopic data from human skeletal remains.

Minnis (1985:11) states that maize contains 3600 calories/kg. This means that each individual would require, for consumption purposes, roughly 6.47 bu/year or 1.64 quintals/year (for 65% of their caloric needs or 0.025 quintals/percentage dependence). Using the yield range of 7.5 to 30 bu/acre (4.7 to 18.8 quintals/ha), this would mean 0.21 to 0.87 acres (0.08 to 0.35 ha) would be needed per person per year. Based on 1848 data (United States Commissioner of Patents 1848:130) a standard acre planted following the 3.5 ft spacing with three kernels/hill would require 5.33 qt (0.17 bu or 0.04 quintals) of seed, which would add a minimal amount to the production needs. Therefore a family of five would be expected to need approximately 1.1 to 4.4 acres (0.45 to 1.78 ha) to produce one year's consumption assuming no spoilage. These independently derived values are in line with those given in the ethnohistoric accounts.
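The consumption arithmetic can be checked directly; a minimal sketch, assuming the standard 56 lb (25.4 kg) bushel of shelled maize (a conversion not stated in the text):

```python
CAL_PER_DAY = 2500    # average individual requirement
MAIZE_SHARE = 0.65    # proportion of diet from maize
CAL_PER_KG = 3600     # caloric content of maize (Minnis 1985:11)
KG_PER_BU = 25.4      # 56 lb bushel of shelled maize (assumed)

kg_per_year = CAL_PER_DAY * MAIZE_SHARE * 365 / CAL_PER_KG
quintals = kg_per_year / 100        # about 1.65 quintals/person/year
bushels = kg_per_year / KG_PER_BU   # about 6.5 bu/person/year
per_percent = quintals / 65         # about 0.025 quintals per % dependence
land_low = quintals / 18.8          # ha/person at 30 bu/acre yields
land_high = quintals / 4.7          # ha/person at 7.5 bu/acre yields
```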

Given the low production capacity of Basketmaker maize, it is not surprising that Emergent Mississippian consumption was measured at 35%. With optimal yields of only 7.5 bu/acre, 0.87 acres/person (4.7 quintals/ha and 0.35 ha/person) would be the minimum field size needed to provide 65% of an individual's caloric needs. Historically, it is unlikely that more than one acre per person could have been cultivated. Yet, expected average yields below 7.5 bu/acre would require field sizes to exceed this limit. A 35% dependence on maize would lower the minimal field size to 0.47 acres/person (0.19 ha/person). The importance of maize would obviously increase as productivity per unit area improved; probably around A.D. 1000 when Northern Flint varieties appeared.

To translate consumption (quintals/person/year) into a nondecreasing function of time, an equation such as:

N(t) = 0.875 + 0.925/(1 + e^(-k(t - 1250)))   (3.10)

where t ∈ [900, 1700] and k is a steepness constant, can be used. The percentage dependence will be:

D(t) = N(t)/0.025 = 35 + 37/(1 + e^(-k(t - 1250)))   (3.11)

The plot of this curve (Figure 4) shows a slow rise in dependence both early (A.D. 900 to 1100) and late (A.D. 1500 to 1700) with a sharp increase ca. A.D. 1250. This is a reasonable reproduction of the observed skeletal data (Lynott et al. 1986).
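The shape described for Figure 4 (a slow rise early and late with a sharp increase ca. A.D. 1250, spanning the 35% to 72% cline) corresponds to a logistic curve. A minimal sketch of one such form; the midpoint of A.D. 1250 and the steepness k = 0.02 are illustrative assumptions, not the fitted constants of (3.10):

```python
import math

def dependence(t, k=0.02):
    """Illustrative percentage dependence on maize in year t (A.D.);
    rises from 35% (early) to 72% (late), steepest near A.D. 1250."""
    return 35.0 + 37.0 / (1.0 + math.exp(-k * (t - 1250)))

def consumption(t, k=0.02):
    """Per capita consumption, quintals/person/year, at 0.025
    quintals per percentage point of dependence."""
    return 0.025 * dependence(t, k)
```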

Figure 4. Percentage dependence on maize as a function of time

Beyond the annual consumption requirements, we must consider the possibility that there was a need to generate surpluses which, in turn, would require increased planting. To store an extra year's worth of grain the land requirement would have to double. Yet, based on the above calculations a single year's supply of grain already approaches what the accounts have generally indicated to be the yield from fields of maximum size. However, it has always been assumed that accounts describing large amounts of stored grain indicated such a surplus existed. It is difficult to adequately estimate the size of such a surplus strictly in terms of need. An alternative approach would be to consider the ability of aboriginal technology to store maize for extended periods. Inability to store the product would negate the value of producing it.

To maintain the integrity of a maize crop under storage conditions one must stay within certain well-defined environmental limits. Optimally, maize should be stored at average temperature and humidity with sufficient air circulation (Bowman and Crossley 1911:106-115). Specifically, shelled seed corn must be maintained below 10°C (50°F) at 45% to 55% relative humidity to maintain germination viability (Craig 1977:710-711). Seed moisture levels greater than 21% are considered high and, at temperatures below freezing, can reduce germination rates 40% to 70% (Bowman and Crossley 1911:116). Today such moisture levels require artificial drying. Maize at 17% to 22% moisture can be stored in bins with air circulation if the temperature is less than 10°C. Under warm summer conditions, the grain would have to maintain moisture levels of less than 13% or 14% in order to minimize spoilage due to molds and insect infestations (Dicke 1977:561; Larson and Hanway 1977:662; Ullstrup 1977:420). Shelled maize, when kept confined, will have a tendency to "heat", reducing the germination rate and leading to rapid spoilage. Once such destruction starts, it will tend to spread quickly. Spoilage results in loss of seed, lowered nutritional values, and the potential production of mycotoxins (Ullstrup 1977:420). In light of aboriginal storage technology we cannot expect large amounts of grain to have been kept for periods in excess of one year.

The practice of drying ears over the winter in the rafters of structures would be optimal in a Mississippian setting, but the amount of space required for surplus yields would seem to exceed that which would be available. A family of five, with a harvest of 6.47 bu/person (1.64 quintals/person), would need 1.14 cubic meters (1140 l) of storage space for shelled grain. If a bushel of maize in 1846 contained approximately 64,000 seeds (United States Commissioner of Patents 1847:130) and an ear produced roughly 100 to 400 grains (following some accounts), a bushel would comprise from 160 to 640 ears, and this family would have several thousand ears to store. If the grain were stored in cribs little would be expected to survive the following hot, humid, southeastern summer. Storage pits could not be ventilated and moisture levels would be too high for long term confinement.
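The storage figures can be reproduced arithmetically; a minimal sketch, assuming 35.24 liters per U.S. bushel (a conversion not stated in the text):

```python
LITERS_PER_BU = 35.24   # volume of a U.S. bushel (assumed)
family_bu = 5 * 6.47    # five persons at 6.47 bu/person/year

storage_liters = family_bu * LITERS_PER_BU   # about 1140 l (1.14 m3)
ears_per_bu_low = 64000 // 400               # 160 ears per bushel
ears_per_bu_high = 64000 // 100              # 640 ears per bushel
family_ears = (family_bu * ears_per_bu_low,
               family_bu * ears_per_bu_high)  # roughly 5,200 to 20,700
```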

It would seem unlikely that production levels beyond those defined above would ultimately be usable over a long period by a Mississippian population. If surpluses were produced they would have to be short term, i.e. to be used between the fall harvest and the following summer. Such a surplus, like that placed in the "king's crib", could be redistributed to other areas experiencing crop failures the same year. Excess grain could also be traded to non-agriculturalists, as was the case with the Huron of southwestern Ontario. But it is highly unlikely the grain could have been kept to mitigate the effects of some future crop failure.

Discussion

At this point it is important to restate the above results as boundary conditions for Mississippian agricultural systems:

  1. Population will be expected to increase at a rate between 0.003 and 0.017 per year.
  2. Individual maize consumption per year is expected to be on the order of 0.025 quintals (0.1 bu) per percent dependence, e.g. an average 65% dependence implies a 1.63 quintals/year (6.5 bu/year) requirement per person. As a function of time, consumption for the Mississippian Period will approximately follow (3.10).
  3. Maximum potential yield under optimal conditions is not expected to greatly exceed 18.8 quintals/ha (30 bu/acre) during the period. Emergent Mississippian yields would not be expected to exceed 4.7 quintals/ha (7.5 bu/acre).
  4. Maximum labor output will not exceed 0.4 ha (1.0 acre) per person.
  5. The expected non-depleted average yield will be 9.99 quintals/ha (sd = 3.31) (15.92/5.28 bu/acre).
  6. Yields are expected to be annually reduced according to equation (3.6).
  7. Allowing for depletion, maximum potential yield is expected to follow equation (3.7).
  8. Time dependent expected yields and their standard deviations, taking into account environmental fluctuations and soil depletion, will follow (3.8) and (3.9), respectively, with the substitution of 2t for t.
  9. The recovery or fallow period will be on the order of 100 to 150 years.

With these parameters, we can produce a quantitative model of agricultural stability for specific archaeological contexts. Up to this point we have not used any East Tennessee archaeological data (i.e. Mississippian observations) to define any aspect of the model. This is in keeping with the necessary constraint of separating observational data from testing situations. We are now ready to test the model by applying it to an archaeological study area. Before developing the proposed East Tennessee example, it would be useful to present a simplified demonstration of the model's use on a less complex set of data. Muller (1978:287-288) provides an appropriate description of the agricultural setting at the Kincaid site in southern Illinois.

Muller's analysis concludes that Kincaid was composed of 400 individuals each requiring 0.4 ha of arable land from an available, total reservoir (R0) of 621 ha. He concludes that 1500 people could be supported at this Mississippian site. Assuming climatic effects follow a N(Et, st) distribution, we can produce probability measures of yearly yields. Thus, if n is 1.64 quintals/person (6.47 bu/person) and no more than 0.4 ha per person are planted per year the probability of "crop failure" will be:

P(crop failure at t) = Pr[yield < x] = Φ[(x - Et)/st]

where x is n/(maximum field size per person), the minimum required yield. This curve is shown in Figure 5 for x equaling 4.06 quintals/ha (15.834 bu/acre). Given a population of 400 individuals at t=0 and 621 ha of land, we can perform a discrete simulation of a 300 year period in the following manner:

[77]

Figure 5. Probability of crop failure over time

[78]

  1. Set:
    1. n = 1.64 quintals/person
    2. maximum field size = 0.4 ha/person
    3. x = 4.1 quintals/ha
    4. soil recovery period = 125 years
  2. Define each year's conditions:
    1. determine the expected yield based on (3.8) adjusted for quintals/ha
    2. use enough land to produce n quintals/person
    3. simulate random fluctuations in yield following (3.12)
    4. arbitrarily remove the field from use after the third crop failure (harvest < x) (return it after 125 years)
    5. if the amount of land available for new fields is less than nP(t) then reduce the population (e.g. through migration) to Rt/n
    6. if no land is available abandon the site for 125 years at which time conditions can return to that of t=0
    7. use r = 0.01 with (3.2) to simulate population growth
  3. Repeat step 2 for a period of 300 years
  4. Chart the following variables over time:
    1. Population (Figure 6)
    2. Harvest (Figure 7)
    3. Proportion of land used to total land available (Figure 8)
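Under stated assumptions, the steps above can be sketched as a discrete simulation, one 0.4 ha field per person. Equations (3.2), (3.8), and (3.12) are not reproduced in this excerpt, so stand-ins are used: yields are normal draws whose mean declines with cultivation age (the DEPLETION rate is hypothetical), and population grows exponentially at rate r in place of (3.2):

```python
import random

N_REQ = 1.64        # quintals required per person per year
FIELD_MAX = 0.4     # ha planted per person
X_MIN = 4.1         # quintals/ha; a harvest below this is a "crop failure"
RECOVERY = 125      # fallow years before a retired field recovers
TOTAL_LAND = 621.0  # ha available at Kincaid (Muller's figure)
R = 0.01            # intrinsic growth rate (stand-in for eq. 3.2)
MEAN_YIELD, SD_YIELD = 9.99, 3.31  # quintals/ha, non-depleted
DEPLETION = 0.15    # quintals/ha lost per year of use (assumed value)

def simulate(years=300, p0=400, seed=0):
    rng = random.Random(seed)
    pop = float(p0)
    fields = []         # in-use fields as [years_cultivated, failures]
    fallow = []         # years remaining for each retired field
    fresh = TOTAL_LAND  # never-used land, ha
    record = []         # (year, population, total harvest in quintals)
    for t in range(years):
        # step 2.2: open one FIELD_MAX plot per person while land lasts
        while len(fields) < int(pop) and fresh >= FIELD_MAX:
            fresh -= FIELD_MAX
            fields.append([0, 0])
        # step 2.5: land shortfall -> reduce population (migration)
        pop = min(pop, float(len(fields)))
        # steps 2.1 and 2.3: expected yield less depletion, plus noise
        harvest = 0.0
        for f in fields:
            y = rng.gauss(max(MEAN_YIELD - DEPLETION * f[0], 0.0), SD_YIELD)
            harvest += max(y, 0.0) * FIELD_MAX
            f[0] += 1
            if y < X_MIN:
                f[1] += 1
        # step 2.4: retire a field after its third crop failure
        retired = sum(1 for f in fields if f[1] >= 3)
        fields = [f for f in fields if f[1] < 3]
        fallow.extend([RECOVERY] * retired)
        # recovered fallow land rejoins the fresh pool (step 2.6 simplified)
        fallow = [yr - 1 for yr in fallow]
        fresh += FIELD_MAX * sum(1 for yr in fallow if yr <= 0)
        fallow = [yr for yr in fallow if yr > 0]
        # step 2.7: population growth (exponential stand-in for eq. 3.2)
        pop *= 1.0 + R
        record.append((t, pop, harvest))
    return record

history = simulate()
```

Charting the population, harvest, and land-use columns of `history` reproduces the general shape of the curves described below, though the exact event years depend on the assumed depletion rate.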

After 51 years all available land would be in a depleted state. At t = 176 the site could be repopulated (in this case by 400 people). Because of the periodic nature of site repopulation, total depletion

[79]

Figure 6. Predicted population curve for Kincaid

[80]

Figure 7. Predicted harvest curve for Kincaid

[81]

Figure 8. Total amount of available land at Kincaid

[82]

would again occur at t = 227 (a cycle of 51 years). Population adjustments occurred at t = 41 and 215. During the 300 year period the average planting duration was 12.6 years per field. The average surplus per person was 0.72 quintals.

Although highly generalized, this example demonstrates that soil depletion would seriously inhibit growth at Kincaid under Muller's terms. Such a level of population density could not be supported for more than a few generations without major adjustments.

Carneiro (1960:82) would argue that the maximum sustainable population (carrying capacity) of such a site should be:

             621 ha
        ---------------  x  12.6 yr
        (125 + 12.6) yr
    -------------------------------   =  142.2 people       (3.13)
            0.4 ha/person
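The arithmetic in (3.13) can be verified directly; a minimal check:

```python
# Carneiro's formula: (T / (fallow + cultivation)) * cultivation / (ha per person)
T, fallow_yr, cult_yr, per_person = 621.0, 125.0, 12.6, 0.4
capacity = T / (fallow_yr + cult_yr) * cult_yr / per_person
print(round(capacity, 1))  # -> 142.2
```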

If we re-simulate the above conditions without using (3.2), maintaining zero population growth at P0 = 142, the fluctuating amount of available land (Rt) is shown in Figure 9. Because of soil depletion, population reductions would occur at t = 126 (Pt drops to 133). Total abandonment would occur at t = 134. This demonstrates that failure to recognize the negative effects of agriculture on the soil invalidates the usefulness of (3.13).

We can now turn to a more detailed application. Using the system definition outlined above, the agricultural potential of East Tennessee's Little Tennessee River Valley will be calculated and compared to the archaeological record. Specific inputs such as soil variability, population levels, and culture change will be used to set the appropriate model parameters for this setting.

[83]

Figure 9. Total amount of available land at Kincaid for a stationary population of 142

[84]