FACULTY
PUBLICATIONS
ASTRONOMY
Defining the Leadership of Discovery: Discovery in
Astronomy
Karen B. Kwitter, Ebenezer Fitch Professor of
Astronomy
Chapter in Jepson Studies in Leadership series, “Leadership
and Discovery,” ed. George R. Goethals & Thomas J. Wren, vol. 2
(Palgrave Macmillan, 2009)
In this chapter, Kwitter examines the history of leadership in astronomy
and its transition from a solitary endeavor to the current state where sizeable
collaborations dominate. She also discusses the various styles in which such
leadership can be expressed.
Alpha Element Abundances in a Large Sample of Galactic
Planetary Nebulae
Jacqueline B. Milingo, Karen B. Kwitter, Ebenezer Fitch
Professor of Astronomy, Richard B.C. Henry, & Steven P. Souza,
Lecturer/Observatory Supervisor
Astrophysical Journal, 711, 619-630 (2010)
In this paper, we present emission line strengths, abundances, and element
ratios (X/O for Ne, S, Cl, and Ar) for a sample of
38 Galactic disk planetary nebulae (PNe) consisting primarily of Peimbert
classification Type I. Spectrophotometry for these
PNe incorporates an extended optical/near-IR range of
λλ3600–9600 Å including
the [S III] lines at 9069 Å and 9532 Å, setting this
relatively large sample apart from typical spectral
coverage. We have utilized the Emission Line Spectrum Analyzer (ELSA), a
five-level atom abundance routine, to determine electron temperatures (Te),
electron densities (Ne), ionization correction factors, and total element
abundances, thereby continuing our work
toward a uniformly processed set of data. With a
compilation of data from >120 Milky Way PNe, we
present results from our most recent analysis of
abundance patterns in Galactic disk PNe. With a wide range of metallicities,
galactocentric distances, and both Type I and
non-Type I objects, we have examined the alpha elements against
H II
regions and blue compact galaxies (H2BCGs)
to discern signatures of depletion or enhancement in PNe progenitor
stars, particularly the destruction or production
of O and Ne. We present evidence that many PNe have higher
Ne/O and lower Ar/Ne ratios compared to H2BCGs
within the range of 8.5–9.0 for 12 + log(O/H). This
suggests that Ne is being synthesized in the low-
and intermediate-mass progenitors. Sulfur abundances in PNe
continue to show great scatter and are
systematically lower than those found in H2BCGs at a given metallicity.
Although we find that PNe do show some distinction
in alpha elements when compared to H2BCG, within the
Peimbert classification types studied, PNe do not
show significant differences in alpha elements amongst themselves,
at least to an extent that would distinguish in
situ nucleosynthesis from the observed dispersion in abundance
ratios.
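As an illustration of the ionization correction factor (ICF) step mentioned
above, the sketch below applies a classic neon ICF, ICF(Ne) = O/O++, to
hypothetical ionic abundances. This is a generic textbook calculation, not
the ELSA code.

```python
# Total elemental abundance from observed ionic abundances via an
# ionization correction factor (ICF). All values are hypothetical.
o_plus = 1.2e-4    # O+/H+  ionic abundance (hypothetical)
o_pp   = 2.8e-4    # O++/H+ ionic abundance (hypothetical)
ne_pp  = 0.7e-4    # Ne++/H+ ionic abundance (hypothetical)

o_total  = o_plus + o_pp     # total O/H from the observed ions
icf_ne   = o_total / o_pp    # classic neon ICF: assumes Ne/Ne++ = O/O++
ne_total = icf_ne * ne_pp    # total Ne/H

print(f"O/H = {o_total:.2e}, Ne/H = {ne_total:.2e}, "
      f"Ne/O = {ne_total / o_total:.2f}")
```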
The 2008 August 1 Eclipse Solar-Minimum Corona
Unraveled
Pasachoff, Jay M., Vojtech Rusin, Miloslav Druckmüller,
Peter Aniol, Metod Saniga, Milan Minarovjech
Astrophysical Journal, 702, 1297-1308 (2009)
We discuss results stemming from observations of the white-light and [Fe
XIV] emission corona during the total eclipse of the Sun of 2008 August 1, in
Mongolia (Altaj region) and in Russia (Akademgorodok, Novosibirsk, Siberia).
Corresponding to the current extreme solar minimum, the white-light corona,
visible up to 20 solar radii, was of a transient type with well-pronounced
helmet streamers situated above a chain of prominences at position angles
48°, 130°, 241° and 322°. A variety of coronal holes,
filled with a number of thin polar plumes, were seen around the poles.
Furthering an original method of image processing, stars as faint as 12th
magnitude, a Kreutz-group comet (C/2008 O1), and a coronal mass ejection (CME)
were also detected, with the smallest resolvable structures being about
1 arcsec, and in some places even less. Differences, presumably motions, in the corona and
prominences are seen even with the 19-min time difference between our sites. In
addition to the high-resolution coronal images, which show the continuum corona
(K-corona) that results from electron scattering of photospheric light, images
of the overlapping green-emission-line (530.3 nm, [Fe XIV]) corona were obtained
with the help of two narrow-passband filters (centered on the line itself and
for the continuum in the vicinity of 529.1 nm, respectively), each with FWHM of
0.15 nm. Through solar observations, on whose scheduling and details we
consulted, with the Solar and Heliospheric Observatory, Hinode’s XRT and
SOT, TRACE, and STEREO, as well as Wilcox Solar Observatory and SOHO/MDI
magnetograms, we set our eclipse observations in the context of the current
unusually low and prolonged solar minimum.
Limb Spicules from the Ground and from Space
Pasachoff, Jay M., William A. Jacobson, and Alphonse C.
Sterling
Solar Physics, 260, 59-82 (2009)
We amassed statistics for quiet-Sun chromospheric spicules at the limb using
ground-based observations from the Swedish 1-m Solar Telescope on La Palma and
simultaneously from NASA’s Transition Region and Coronal Explorer (TRACE)
spacecraft. The observations were obtained in July 2006. With the 0.2
arcsecond resolution achieved after maximizing the ground-based resolution with
the Multi-Object Multi-Frame Blind Deconvolution (MOMFBD) program, we obtained
specific statistics for the sizes and motions of over two dozen individual
spicules, based on movies compiled at 50-second cadence from a series of five
wavelengths observed with the SOUP filter in a very narrow band at H-alpha,
on-band and in the red and blue wings at 0.035 nm and 0.070 nm (10 s at each
wavelength), together with simultaneous observations in the 160 nm UV
continuum from TRACE. The
MOMFBD restoration also automatically aligned the images, facilitating the
making of Dopplergrams at each off-band pair. We studied 40 H-alpha spicules,
and 14 EUV spicules that overlapped H-alpha spicules; we found that their
dynamical and morphological properties fit into the framework of several
previous studies. From a preliminary comparison with spicule theories, our
observations are consistent with a reconnection mechanism for spicule
generation, and with UV spicules being a sheath region surrounding the H-alpha
spicules.
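For readers unfamiliar with the technique, a Dopplergram is conventionally
built from an off-band image pair as a normalized difference. The sketch below
shows that construction; the arrays are synthetic stand-ins, not data from the
paper.

```python
import numpy as np

# Synthetic stand-ins for MOMFBD-aligned blue- and red-wing H-alpha images.
blue = np.random.rand(64, 64) + 1.0   # intensity at line center - 0.035 nm
red  = np.random.rand(64, 64) + 1.0   # intensity at line center + 0.035 nm

# Normalized difference: positive where the blue wing is brighter,
# i.e. where material moves toward the observer.
dopplergram = (blue - red) / (blue + red)
```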
A Comparison of the Red and Green Coronal Line
Intensities at the 29 March 2006 and the
1 August 2008 Total Solar
Eclipses: Considerations of the Temperature of the Solar Corona
Voulgaris, A., T. Athanasiadis, J. H. Seiradakis, and J. M.
Pasachoff
Solar Physics, 264 (June 2010)
During the total solar eclipse at Akademgorodok, Siberia, Russia, on 1
August 2008, we imaged the flash spectrum with a slitless spectrograph. We have
spectroscopically determined the duration of totality, the epochs of the
second and third contacts, and the duration of the flash spectrum. Here we compare the 2008
flash spectra with those that we similarly obtained from the total solar eclipse
of 29 March 2006, at Kastellorizo, Greece. Any changes in the intensity of the
coronal emission lines, in particular those of [Fe X] and [Fe XIV], could give
us valuable information about the temperature of the corona. The results show
that the ionization state of the corona, as manifested especially by the
[Fe XIV] emission line, was much weaker during the 2008 eclipse, indicating that
following the long, inactive period during the solar minimum, there was a drop
in the overall temperature of the solar corona.
Size and Albedo of Kuiper Belt Object 55636 from a
Stellar Occultation
Elliot, J. L., M. J. Person, C.A. Zuluaga, A.S. Bosh, E.R.
Adams, T.C. Brothers, A. A. S. Gulbis, S. E. Levine, M. Lockhart, A. M. Zangari,
B. A. Babcock, K. DuPré, J. M. Pasachoff, S. P. Souza, W. Rosing, N.
Secrest, L. Bright, E. W. Dunham, S. S. Sheppard, M. Kakkala, T. Tilleman, B.
Berger, J. W. Briggs, G. Jacobson, P. Valleli, B. Volz, S. Rapoport, R. Hart, M.
Brucker, R. Michel, A. Mattingly, L. Zambranno-Marin, A. W. Meyer, J. Wolf, E.
V. Ryan, W. H. Ryan, K. Morzinsky, B. Grigsby, J. Brimacombe, D. Ragozzine, H.
G. Montano, and A. Gilmore
Nature, 465, 897-900 (June 17, 2010)
The Kuiper belt is a collection of dwarf planets and other small bodies
that lie beyond the orbit of Neptune. Believed to have formed contemporaneously
with the planets (or soon thereafter), Kuiper belt objects (KBOs) offer clues,
through their spatial distribution and physical properties, to conditions during
the formation of the solar system. Unfortunately, due to their small size and
great distance, few methods allow detailed investigations of these frigid
bodies. Here we report the first multi-chord observations of a KBO stellar
occultation, which occurred on 2009 October 9 (UT). We find that the KBO 55636
(2002 TX300), which is a member of the water-ice rich Haumea KBO collisional
family, has a mean radius of 143 ± 5 km (circular solution). Allowing for
possible elliptical shapes we find a geometric albedo 0.88 (+0.06, –0.14)
in the V photometric band, which firmly establishes that 55636 is smaller than
previously thought and, like its parent body Haumea, is among the most highly
reflective objects in the Solar System. Dynamical calculations by two groups
indicate that the collision that created 55636 occurred at least 1 Gyr ago,
which raises the question of how such a small, ancient body can have a highly
reflective surface.
The MIT Program for Predicting Stellar Occultations by
Kuiper Belt Objects
Elliot, J.L., Zuluaga, C.A., Person, M.J., Adams, E.R.,
Lockhart, M.F., Zangari A.M., Bosh, A.S., Gulbis, A.A.S., Levine, S.E.,
Sheppard, S.S., Dunham, E.W., Bright, L., Souza, S.P., Pasachoff, J.M., Babcock,
B.A., Ryan, W.H., and Ryan, E.V.
Presented at 41st Meeting, Division for Planetary Sciences,
American Astronomical Society, Puerto Rico, October 2009
With observations of a stellar occultation by a Kuiper belt object (KBO)
from multiple stations, one could establish its radius with an accuracy of a few
kilometers. Combining this radius with photometry would establish an accurate
geometric albedo. For those KBOs with orbiting companions, these data will
further provide highly accurate densities constraining material composition.
Stellar occultation data also establish stringent upper limits on any
atmospheres and probe for small, nearby companions. The difficulty in observing
a KBO occultation has been in generating an accurate prediction so that
observers can be deployed within the occultation shadow path. Current KBO
ephemerides are at best accurate to a few tenths of an arcsecond, while angular
radii of the largest bodies are less than 0.02 arcsec. To improve the
ephemerides of the KBOs most promising for stellar occultations, we conduct
astrometric observations of KBOs selected (i) for large angular radii, and (ii)
in sky regions with large star densities. We have made bi-monthly observations
with the Lowell 42-inch Hall telescope since Dec. 2004 and monthly to bi-monthly
observations with the SMARTS 0.9 m at CTIO since May 2005. Approximately 1200
KBO astrometric measurements have been submitted to the Minor Planet Center. We
use these data to establish ephemeris correction models with which we predict
appulses by target KBOs. We observed three of these appulses to test our
accuracy. The difference between the predicted and observed closest approach
agrees within the formal error for two of the three appulses, but the errors are
somewhat larger than the body’s radius. Hence our predictions are almost
accurate enough to reliably place observers within the shadow path of a KBO
occultation, and they improve with each astrometric observation. This work is
supported, in part, by USRA subcontract 8500-98-03 (Lowell Observatory) and NASA
Grant NNX07AK73G (MIT).
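The scale mismatch quoted above (ephemeris errors of a few tenths of an
arcsecond versus angular radii below 0.02 arcsec) is easy to check with round
numbers; the radius and distance below are illustrative values, not figures
from the abstract.

```python
import math

radius_km   = 700.0      # a large KBO (illustrative)
distance_au = 43.0       # a typical KBO distance (illustrative)
au_km       = 1.496e8    # kilometers per astronomical unit

angular_radius_rad    = radius_km / (distance_au * au_km)
angular_radius_arcsec = math.degrees(angular_radius_rad) * 3600.0
print(f"angular radius ~ {angular_radius_arcsec:.3f} arcsec")  # ~0.022
```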
PICO: Portable Instrument for Capturing
Occultations
M. Lockhart, M.J. Person, J.L. Elliot and S.P. Souza
Publications of the Astronomical Society of the Pacific, submitted
(2010)
We describe a portable imaging photometer for the observation of stellar
occultation events by Kuiper Belt objects (KBOs) and other small bodies. The
system is referred to as the Portable Instrument for Capturing Occultations
(PICO). It is designed to be transportable to remote observing sites by a
single observer. A GPS timing system is used to trigger exposures of a Finger
Lakes Instrumentation ML261E-25 camera to facilitate the combination of
observational results from multiple sites. The system weighs a total of 11 kg
when packed into its single rigid 55.1×35.8×22.6-cm container, meeting
current airline size and weight limits for carry-on baggage. Twelve such
systems have been constructed. Nine systems were deployed for observation of a
stellar occultation by Kuiper Belt object 55636 in October 2009 (Elliot et al.
2010). During the same month, one system was used to record a stellar
occultation by minor planet 762 Pulcova.
BIOLOGY
A Laboratory Exercise for a College-Level, Introductory
Neuroscience Course Demonstrating Effects of Housing Environment on Anxiety and
Psychostimulant Sensitivity
L.M. Pritchard, T.A. Van Kempen, H. Williams, Professor of
Biology, B. Zimmerberg
Journal of Undergraduate Neuroscience Education, 7, 26-32
(2008)
Distinct Startle Responses are Associated With
Neuroanatomical Differences in Pufferfishes
A.K. Greenwood, C.L. Peichel and S.J. Zottoli, Professor of
Biology
Journal of Experimental Biology, 213, 613-620 (2010)
CHEMISTRY
Bridged Bi-Aromatic Ligands, Catalysts,
Processes
for Polymerizing and Polymers Therefrom
Thomas R. Boussie, Oliver Brummer, Gary M. Diamond,
Christopher Goh, Assistant Professor of Chemistry,
Anne M. LaPointe,
Margarete K. Leclerc, and James A.W. Shoemaker
U.S. Patent 7,659,415 (2010)
New ligands and compositions with bridged bis-aromatic ligands are
disclosed that catalyze the polymerization of monomers into polymers.
These catalysts with metal centers have high performance characteristics,
including higher comonomer incorporation into ethylene/olefin copolymers, where
such olefins are, for example, 1-octene, propylene, or styrene. The
catalysts also polymerize propylene into isotactic polypropylene.
Effects of PEG and PEGA Polymer Conjugation on Enzymatic
Activity
Karen Chiu ’10, Iris Lee ’09, James W. Lowe Jr.
’09, Matthew T. Limpar ’09,
and Sarah L. Goh, Assistant
Professor of Chemistry
Abstracts of the American Chemical Society, 239, BIOL-80
(2010)
Polymer directed enzyme prodrug therapy (PDEPT) is a novel method of
directing chemotherapy treatment towards tumors. PDEPT takes advantage of the
enhanced permeability and retention effect, whereby macromolecules accumulate in
cancerous tissue due to its leaky vasculature and poor lymphatic drainage
systems. In this way, polymer-enzyme bioconjugates with high molecular weights
(>40 kg/mol) or large diameters (>4 nm) can be selectively taken up and
activate prodrugs within tumors.
Covalently attaching polymers to proteins indubitably affects their
properties, as the protein is subjected to stresses of environmental changes,
and perhaps even conformational changes. Thus, selection of an appropriate
polymer and development of an effective conjugation technique are of paramount
importance. PEG (polyethylene glycol) and its analog polyPEGA (polyethylene
glycol methyl ether acrylate) are two potentially suitable polymers. Not only do
they enhance the molecular weight and diameter of the model enzyme, trypsin, but
certain polymer lengths can also enhance trypsin activity.
Synthesis of Dendritic Trypsin Sensors
Matthew N. Zhou ’12, Alexander N. Beecher
’10, and Sarah L. Goh, Assistant Professor of Chemistry
Abstracts of the American Chemical Society, 239, POLY-73
(2010)
Polymer Preprints, 51, 674-675 (2010)
Typical enzymatic assays for trypsin rely on amide or ester bond cleavage
to produce a single chromophore. Fluorescence-based assays generally rely on
fluorophore-quencher or matched donor-acceptor chromophore pairs for
fluorescence resonance energy transfer. In this research, we are looking to take
advantage of the increased sensitivity of fluorescence-based assays over UV, as
well as to simplify the energy transfer system by using a self-quenching dye.
When multiple fluorophores are attached to the same substrate, a decrease in
intensity is observed due to internal energy transfer. In this system,
Rhodamine-B fluorophores are attached via trypsin-cleavable lysine linkers to
dendritic scaffolds derived from 2,2-bis(hydroxymethyl) propionic acid.
Polyethylene glycol methyl ether was used as a water-solubilizing core. Cleavage
of the lysine linkers should release fluorophore molecules and cause a
detectable and corresponding increase in fluorescence activity. Fluorescence
analysis of synthesized boc-protected substrates demonstrates that fluorescent
activity decreases with increasing dendrimer generation.
Modeling Square-Wave Pulse Regulation
Sylvia J. Lou ’10, Annaliese K. Beery ’98, and
Enrique Peacock-López, Professor of Chemistry
Dynamical Systems, 25, 133-143 (2010)
The characteristic hypothalamus square-wave secretion and its regulation of
the pituitary are essential for the optimal functioning of the menstrual cycle.
Here we consider two coupled models, where one mimics the square-wave secretion
and the second, a self-replication chemical system, representing a regulated
subsystem. We analyze the effect of the period and amplitude of the
square-wave oscillations on the secondary oscillator, finding that both period
and amplitude shape the secondary oscillations. Furthermore, a combined slow
change of both period and amplitude is required for biologically suitable
product concentrations of the secondary oscillator.
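The general idea of square-wave forcing can be caricatured in a few lines; the
sketch below is a minimal stand-in with hypothetical parameters, not the
authors' coupled model.

```python
import numpy as np

T, A, duty = 10.0, 2.0, 0.5    # period, amplitude, on-fraction (hypothetical)
k_decay    = 0.5               # first-order loss rate of the driven species
dt, t_end  = 0.01, 100.0

t = np.arange(0.0, t_end, dt)
s = A * ((t % T) < duty * T)   # square-wave "secretion" signal
x = np.zeros_like(t)           # driven secondary variable
for i in range(1, len(t)):     # forward-Euler integration of dx/dt = s - k*x
    x[i] = x[i - 1] + dt * (s[i - 1] - k_decay * x[i - 1])
```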
Switching Induced Oscillations in the Logistic
Map
Makisha P. Maier ’10 and Enrique Peacock-López,
Professor of Chemistry
Physics Letters A, 374, 1028-1032 (2010)
In ecological modeling, seasonality can be represented as a switching
between different environmental conditions. This switching strategy can be
related to the so-called Parrondian games, where the alternation of two losing
games yields a winning game. Hence we can consider two dynamics that, by
themselves, yield undesirable behaviors, but when alternated yield a desirable
oscillatory behavior. In this case, we also consider a noisy switching strategy
and find that the desirable oscillatory behavior prevails.
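A minimal sketch of such a switching strategy, alternating two logistic maps
x → rx(1 − x): the r values below are hypothetical, chosen only to illustrate
the alternation, and replacing the deterministic alternation with a random
choice gives a noisy switching strategy of the kind the paper considers.

```python
def logistic(r, x):
    return r * x * (1.0 - x)

r_a, r_b = 4.0, 2.8            # dynamics A and B (hypothetical parameters)
x, trajectory = 0.3, []
for step in range(1000):
    r = r_a if step % 2 == 0 else r_b   # deterministic A, B, A, B, ...
    x = logistic(r, x)
    trajectory.append(x)
```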
Regioselectivity of 1,3-Dipolar Cycloaddition to Phenyl
Vinyl Sulfoxide
Tyler C. Gray ’07, Faraj Hasanayn, David P.
Richardson, Professor of Chemistry,
and J. Hodge Markgraf
Journal of Heterocyclic Chemistry, 46, 1318-1323 (2009)
The regioselectivity of the 1,3-dipolar cycloaddition of benzonitrile
N-oxide to phenyl vinyl sulfoxide is established by isotopic labeling and
13C NMR analysis, and by DFT calculations.
ESR Dating at Hominid and Archaeological Sites During the
Pleistocene
Bonnie A.B. Blackwell, Anne R. Skinner, Senior Lecturer,
Joel I.B. Blickstein,
L.V. Golovanova, V.B. Doronichev, and M.R.
Séronie-Vivien
Sourcebook of Paleolithic Transitions: Methods, Theories, and
Interpretations, 93-120 (2009)
In any fossil site, dating the site is essential to understanding the
site's significance, because chronological data permits comparisons with
materials from other sites, and ultimately enables regional settlement patterns,
migration or evolutionary rates to be determined. A dating method's ability to
date significant fossil materials directly rather than just dating associated
sedimentary or rock units adds to its archaeological and paleontological
utility. Electron spin resonance (ESR) dating can provide chronometric ages for
vertebrate teeth throughout the Pleistocene and late Pliocene. For mollusc
shells and coral, ESR's effective dating range spans much of the Pleistocene.
As such, ESR has been used to assess the evolutionary ranges for both hominids
and their associated cultural materials and to solve specific geochronological
problems. We discuss several examples where ESR has been used to date
Neanderthal burials and skeletal materials, including Mezmaiskaya, Obi-Rakhmat,
and Pradayrol, as well as dating for cultural materials from pebble tool
cultures, Mousterian, and Middle Paleolithic sites. In each case, ESR has
provided vital geochronological data, in some cases, where no other methods were
applicable.
Dating and Paleoenvironmental Interpretation of the Late
Pleistocene
Archaeological Deposits at Divje Babe I,
Slovenia
Bonnie A. B. Blackwell, Edwin S. K. Yu, Anne R. Skinner,
Senior Lecturer,
Ivan Turk, Joel I. B. Blickstein, Dragomir Skaberne, Janez
Turk and Beverly Lau
The Mediterranean from 50,000 BP, 179-210 (2009)
New chronological, sedimentary morphometric and diagenetic analyses were
combined to determine sedimentation rates, relative humidity, and
paleotemperatures for the Middle to Early Upper Paleolithic deposits at Divje
Babe I, Slovenia's oldest archaeological site. Its thick archaeological
sequence housed Mousterian artifacts, including hearths, a perforated ursid
femur interpreted as a flute, bone tools, and ∼ 200,000 cave bear
(Ursus spelaeus) skeletal remains (NISP).
Eleven archaeologically significant layers were dated by 42 independent
standard ESR (electron spin resonance) analyses from 26 teeth. To calculate
volumetrically averaged external dose rates, > 190 sedimentary component
samples were analyzed by NAA. Sediment was also analyzed for grain size,
geochemistry, cryoturbation, cataclasis, pre- and post-depositional corrosion,
secondary cementation, and aggregate formation. Local paleotemperature
estimates were verified by comparing with palynological determinations and then
compared to global δ18O and
palynological records.
Initial Excavation and Dating of Ngalue Cave:
A
Middle Stone Age Site along the Niassa Rift
J. Mercader, Y. Asmerom, T. Bennett, M. Raja, and Anne
Skinner, Senior Lecturer
Journal of Human Evolution, 57, 63-74 (2009)
Direct evidence for a systematic occupation of the African tropics during
the early late Pleistocene is lacking. Here, we report a record of human
occupation between 105 and 42 ka, based on results from a
radiometrically-dated cave section from the Mozambican segment of the Niassa
(Malawi/Nyasa) Rift called Ngalue. The sedimentary sequence from bottom to top
has five units. We concentrate on the so-called ‘‘Middle
Beds,’’ which contain a Middle Stone Age industry characterized by
the use of the discoidal reduction technique. A significant typological feature
is the presence of formal types such as points, scrapers, awls, and microliths.
Special objects consist of grinders/core-axes covered by ochre. Ngalue is one
of the few directly-dated Pleistocene sites located along the biogeographical
corridor for modern human dispersals that links east, central, and southern
Africa, and, with further study, may shed new light on hominin cave habitats
during the late Pleistocene.
ESR Dating Pleistocene Barnacles from BC and
Maine:
A New Method for Tracking Sea Level Change
Bonnie A. B. Blackwell, Jane J.J. Gong, Anne R. Skinner,
Senior Lecturer,
André Blais-Stevens, Robert E. Nelson, Joel I.B.
Blickstein
Health Physics, 98(2), 417-426 (2010)
Barnacles have never been successfully dated by electron spin resonance
(ESR). Living mainly in the intertidal zone, barnacles die when sea level
changes cause their permanent exposure. Thus, dating the barnacles dates past
sea level changes that occur due to ocean volume changes, crustal isostasy, and
tectonics. By modifying the standard ESR method for molluscs to chemically
dissolve 20 μm off the shells, six barnacle samples from Norridgewock,
Maine, and Khyex River, British Columbia, were tested for suitability for ESR
dating. Due to Mn2+ interference peaks, the four Maine barnacle samples were
not datable by ESR. Two barnacles from BC, which lacked Mn2+ interference,
yielded a mean ESR age of 15.1 ± 1.0 ka. These ages agree well with 14C dates on the
barnacles themselves and wood in the overlying glaciomarine sediment. Although
stability tests to calculate the mean dating signal lifetime and more ESR
calibration tests against other barnacles of known age are needed to ensure the
method’s accuracy, ESR can indeed date Balanus, and thus, sea level
changes.
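The age computation underlying ESR dating is the ratio of the accumulated
(equivalent) dose recorded by the ESR signal to the total dose rate from
environmental and internal radioactivity. The numbers below are hypothetical,
chosen only to land near the 15.1 ka age quoted above.

```python
equivalent_dose_gy  = 22.6   # Gy, from the ESR signal intensity (hypothetical)
dose_rate_gy_per_ka = 1.5    # Gy/ka, from U, Th, K and cosmic dose (hypothetical)

age_ka = equivalent_dose_gy / dose_rate_gy_per_ka
print(f"ESR age ~ {age_ka:.1f} ka")   # ~15.1 ka
```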
COMPUTER
SCIENCE
Application Management and Visualization
Jeannie Albrecht, Assistant Professor of Computer Science
and Ryan Braud
Proceedings of the IEEE Conference on Peer-to-Peer Systems (P2P),
September 2009.
Deploying, running, and maintaining applications on a distributed set of
resources is a challenging task. Distributed application management
systems are designed to automate the process, and to help developers cope with
the problems that arise. In this paper, we highlight the key features of Plush,
an application management system for PlanetLab and ModelNet, and describe how
Plush simplifies peer-to-peer system visualization and evaluation.
PlanetLab – P2P Testing in the Wild
Elliot Jaffe and Jeannie Albrecht, Assistant Professor of
Computer Science
Proceedings of the IEEE Conference on Peer-to-Peer Systems (P2P),
September 2009
PlanetLab is a seasoned Internet testbed for distributed applications
consisting of donated nodes located at more than 350 remote locations spread
across the globe. It is particularly appropriate for peer-to-peer application
research due to its large-scale, distributed operation, and the availability of
nodes with edge characteristics. This paper describes the basic testbed offering
and suggests appropriate use-cases.
Managing Distributed Applications Using Gush
Jeannie Albrecht, Assistant Professor of Computer Science
and Danny Yuxing Huang ’10
Proceedings of the Sixth International Conference on Testbeds and
Research Infrastructures for the Development of Networks and Communities,
Testbeds Practices Session (TridentCom), May 2010
Deploying and controlling experiments running on a distributed set of
resources is a challenging task. Software developers often spend a significant
amount of time dealing with the complexities associated with resource
configuration and management in these environments. Experiment control systems
are designed to automate the process, and to ultimately help developers cope
with the common problems that arise during the design, implementation, and
evaluation of distributed systems. However, many of the existing control systems
were designed with specific computing environments in mind, and thus do not
provide support for heterogeneous resources in different testbeds. In this
paper, we explore the functionality of Gush, an experiment control system, and
discuss how it supports execution on three of the four GENI control
frameworks.
Case Studies of Liberal Arts Computer Science
Programs
D. Baldwin, A. Brady, Andrea Danyluk, Professor of Computer
Science, J. Adams, and A. Lawrence
ACM Transactions on Computing Education, Vol. 10, Issue 1
(2010)
Many undergraduate liberal arts institutions offer computer science majors.
This article illustrates how quality computer science programs can be realized
in a wide variety of liberal arts settings by describing and contrasting the
actual programs at five liberal arts colleges: Williams College, Kalamazoo
College, the State University of New York at Geneseo, Spelman College, and
Calvin College. While the example programs differ in size, mission, and the
nature of their home institutions, all take advantage of their liberal arts
setting to offer rich computer science educations. Comparing these programs to
each other and to the latest ACM/IEEE Computer Society computer science
curriculum shows that the liberal arts programs are distinguishable from the
ACM/Computer Society recommendations but at the same time are strong
undergraduate majors.
Introducing Concurrency in CS 1
Kim Bruce, Andrea Danyluk, Professor of Computer Science and
Thomas Murtagh, Professor of Computer Science
Proceedings of the 41st ACM Technical Symposium on Computer Science
Education, 224-228 (2010)
Because of the growing importance of concurrent programming, many people
are trying to figure out where in the curriculum to introduce students to
concurrency. In this paper we discuss the use of concurrency in an introductory
computer science course. This course, which has been taught for ten years,
introduces concurrency in the context of event-driven programming. It also
makes use of graphics and animations with the support of a library that reduces
the syntactic overhead of using these constructs. Students learn to use
separate threads in a way that enables them to write programs that match their
intuitions of the world. While the separate threads do interact, programs are
selected so that race conditions are generally not an issue.
Deep Transfer as Structure Learning in Markov Logic
Networks
David Moore ’10 and Andrea Danyluk, Professor of
Computer Science
Proceedings of the AAAI-10 Workshop on Statistical Relational
Artificial Intelligence (2010)
Learning the relational structure of a domain is a fundamental problem in
statistical relational learning. The deep transfer algorithm of Davis and
Domingos attempts to improve structure learning in Markov logic networks by
harnessing the power of transfer learning, using the second-order structural
regularities of a source domain to bias the structure search process in a target
domain. We propose that the clique-scoring process which discovers these
second-order regularities constitutes a novel standalone method for learning the
structure of Markov logic networks, and that this fact, rather than the transfer
of structural knowledge across domains, accounts for much of the performance
benefit observed via the deep transfer process. This claim is supported by
experiments in which we find that clique scoring within a single domain often
produces results equaling or surpassing the performance of deep transfer
incorporating external knowledge, and also by explicit algorithmic similarities
between deep transfer and other structure learning techniques.
Adversarial Memory for Detecting
Destructive Races
Stephen Freund, Professor of Computer Science and Cormac
Flanagan
ACM Conference on Programming Language Design and Implementation,
2010
Multithreaded programs are notoriously prone to race conditions, a problem
exacerbated by the widespread adoption of multi-core processors with complex
memory models and cache coherence protocols. Much prior work has focused on
static and dynamic analyses for race detection, but these algorithms typically
are unable to distinguish destructive races that cause erroneous behavior from
benign races that do not. Performing this classification manually is difficult,
time-consuming, and error-prone.
This paper presents a new dynamic analysis technique that uses adversarial
memory to classify race conditions as destructive or benign on systems with
relaxed memory models. Unlike a typical language implementation, which may only
infrequently exhibit non-sequentially consistent behavior, our adversarial
memory implementation exploits the full freedom of the memory model to return
older, unexpected, or stale values for memory reads whenever possible, in an
attempt to crash the target program (that is, to force the program to behave
erroneously). A crashing execution provides concrete evidence of a destructive
bug, and this bug can be strongly correlated with a specific race condition in
the target program.
Experimental results with our Jumble prototype for Java demonstrate that
adversarial memory is highly effective at identifying destructive race
conditions, and in distinguishing them from race conditions that are real but
benign. Adversarial memory can also reveal destructive races that would not be
detected by traditional testing (even after thousands of runs) or by model
checkers that assume sequential consistency.
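A language-agnostic sketch of the adversarial-memory idea (the Jumble tool
itself targets Java): remember old writes per location and, at a racy read,
return any value that a relaxed memory model could legally supply, in the hope
of crashing the program under test. The interface below is invented for
illustration.

```python
import random

class AdversarialMemory:
    """Toy model: racy reads may see any previously written (stale) value."""

    def __init__(self):
        self.history = {}                  # location -> list of past values

    def write(self, loc, value):
        self.history.setdefault(loc, []).append(value)

    def read(self, loc, racy):
        values = self.history.get(loc, [None])
        if racy:
            return random.choice(values)   # adversarial: any old value
        return values[-1]                  # non-racy reads see the latest write
```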
The RoadRunner Dynamic Analysis Framework
for Concurrent Programs
Stephen Freund, Professor of Computer Science and Cormac
Flanagan
ACM Workshop on Program Analysis for Software Tools and Engineering,
2010
RoadRunner is a dynamic analysis framework designed to facilitate rapid
prototyping and experimentation with dynamic analyses for concurrent Java
programs. It provides a clean API for communicating an event stream to back-end
analyses, where each event describes some operation of interest performed by the
target program, such as accessing memory, synchronizing on a lock, forking a new
thread, and so on. This API enables the developer to focus on the essential
algorithmic issues of the dynamic analysis, rather than on orthogonal
infrastructure complexities.
Each back-end analysis tool is expressed as a filter over the event stream,
allowing easy composition of analyses into tool chains. This tool-chain
architecture permits complex analyses to be described and implemented as a
sequence of more simple, modular steps, and it facilitates experimentation with
different tool compositions. Moreover, the ability to insert various monitoring
tools into the tool chain facilitates debugging and performance tuning.
Despite RoadRunner's flexibility, careful implementation and optimization
choices enable RoadRunner-based analyses to offer comparable performance to
traditional, monolithic analysis prototypes, while being up to an order of
magnitude smaller in code size. We have used RoadRunner to develop several dozen
tools and have successfully applied them to programs as large as the Eclipse
programming environment.
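A sketch of the filter-over-an-event-stream architecture described above,
written in Python for brevity (RoadRunner itself is a Java framework); the
event fields and the two example filters are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "read", "write", "acquire", "release", "fork"
    thread: int
    target: str    # memory location or lock name

def count_accesses(events):
    """Monitoring filter: count memory accesses, passing every event through."""
    n = 0
    for e in events:
        if e.kind in ("read", "write"):
            n += 1
        yield e
    print(f"{n} memory accesses")

def only_sync(events):
    """Filter: forward only synchronization events to the next tool."""
    for e in events:
        if e.kind in ("acquire", "release"):
            yield e

stream = [Event("read", 1, "x"), Event("acquire", 1, "m"),
          Event("write", 1, "x"), Event("release", 1, "m")]
for e in only_sync(count_accesses(iter(stream))):   # tools compose by chaining
    pass
```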
The Role of Programming Languages in
Teaching Concurrency
Stephen Freund, Professor of Computer Science, Kim B. Bruce
and Doug Lea
Workshop on Curricula in Concurrency and Parallelism, 2009
In this position paper we restrict attention to a basic prerequisite to
effective integration. The study of programming languages provides a basis for
teaching students how to express concurrent and parallel constructions, how to
understand their semantics, and how to analyze properties of their programs.
More concretely, we present recommendations stemming from recent
SIGPLAN-sponsored activities regarding programming language curriculum
development. As a first, but important step, toward best preparing students for
the increasingly-concurrent landscape, we advocate their recommendation to
include a common core of language and implementation concepts in all Computer
Science and Software Engineering programs.
Approximating Minimum Reset Sequences
Michael Gerbush ’09 and Brent Heeringa, Assistant
Professor of Computer Science
15th International Conference on Implementation and
Application of Automata
We consider the problem of finding minimum reset sequences in
synchronizing automata. The well-known Černý conjecture states that every
n-state synchronizing automaton has a reset sequence with length at most
(n-1)^2. While this conjecture gives an upper bound on the length
of every reset sequence, it does not directly address the problem of finding the
shortest reset sequence. We call this the minimum reset sequence
(MRS) problem. We give an O(mn^k + n^(k+1))-time
(n-1)/(k-1)-approximation for the MRS problem for any k ≥ 2. We also show
that our analysis is tight. When k=2 our algorithm reduces to Eppstein's
algorithm and yields an (n-1)-approximation. When k=n our algorithm is the
familiar exponential-time, exact algorithm. We define a non-trivial class of
MRS which we call stackcover. We show that stackcover naturally
generalizes two classic optimization problems: min setcover and
shortest common supersequence. Both these problems are known to be hard
to approximate, although at present, setcover has a slightly stronger lower
bound. In particular, it is NP-hard to approximate setcover to within a factor
of c * log n for some c>0. Thus, the MRS problem is at least as hard to
approximate as setcover. This improves the previous best lower bound which
showed that it was NP-hard to approximate the MRS on binary alphabets to within
any constant factor. Our result requires an alphabet of arbitrary size.
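For concreteness, the sketch below implements the greedy pairwise-merging
strategy that the k=2 case reduces to (Eppstein's algorithm): repeatedly find
a shortest word that merges some pair of current states, and apply it to the
whole set. The example automaton is the classic four-state Černý automaton.

```python
from collections import deque

def merge_word(delta, p, q):
    """BFS over state pairs for a shortest word sending p and q to one state."""
    seen, queue = {(p, q)}, deque([((p, q), [])])
    while queue:
        (a, b), word = queue.popleft()
        if a == b:
            return word
        for letter, moves in delta.items():
            nxt = (moves[a], moves[b])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + [letter]))
    return None                          # the pair cannot be merged

def apply_word(delta, state, word):
    for letter in word:
        state = delta[letter][state]
    return state

def greedy_reset(delta, states):
    current, word = set(states), []
    while len(current) > 1:
        p, q = sorted(current)[:2]       # any pair; better choices are possible
        w = merge_word(delta, p, q)
        if w is None:
            return None                  # automaton is not synchronizing
        word += w
        current = {apply_word(delta, s, w) for s in current}
    return word

# Cerny automaton C_4: "a" cyclically permutes the states; "b" sends 3 to 0.
delta = {"a": {0: 1, 1: 2, 2: 3, 3: 0},
         "b": {0: 0, 1: 1, 2: 2, 3: 0}}
print(greedy_reset(delta, [0, 1, 2, 3]))
```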
Ambient Occlusion Volumes
Morgan McGuire, Assistant Professor of Computer
Science
Proceedings of ACM SIGGRAPH and Eurographics High Performance Graphics,
June 2010
This paper introduces a new approximation algorithm for the near-field
ambient occlusion problem. It combines known pieces in a new way to achieve
substantially improved quality over fast methods and substantially improved
performance compared to accurate methods. Intuitively, it computes the analog of
a shadow volume for ambient light around each polygon, and then applies a
tunable occlusion function within the region it encloses. The algorithm operates
on dynamic triangle meshes and produces output that is comparable to ray traced
occlusion for many scenes. The algorithm's performance on modern GPUs is largely
independent of geometric complexity and is dominated by fill rate, as is the
case with most deferred shading algorithms.
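The “tunable occlusion function” can be pictured as a falloff that fades an
occluder's contribution to zero at a chosen obscurance radius. The polynomial
below is one plausible choice for illustration, not the paper's exact
function.

```python
def falloff(d, delta, k=2.0):
    """Occlusion weight (0..1) for an occluder at distance d; delta is the
    obscurance radius and k tunes how quickly occlusion decays."""
    if d >= delta:
        return 0.0
    return (1.0 - d / delta) ** k
```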
Real-time Stochastic Rasterization on Conventional GPU
Architectures
Morgan McGuire, Assistant Professor of Computer Science, E.
Enderton, P. Shirley, and D. Luebke
Proceedings of ACM SIGGRAPH and Eurographics High Performance Graphics,
June 2010
This paper presents a hybrid algorithm for rendering approximate motion and
defocus blur with precise stochastic visibility evaluation. It
demonstrates---for the first time, with a full stochastic technique---real-time
performance on conventional GPU architectures for complex scenes at 1920x1080 HD
resolution. The algorithm operates on dynamic triangle meshes for which
per-vertex velocity or corresponding vertices from the previous frame are
available. It leverages multisample antialiasing (MSAA) and a tight
space-time-aperture convex hull to efficiently evaluate visibility independently
of shading. For triangles that cross z=0, it falls back to a 2D bounding
box that we hypothesize but do not prove is conservative. The algorithm further
reduces sample variance within primitives by integrating textures according to
ray differentials in time and aperture.
Hardware-Accelerated Global Illumination by Image Space
Photon Mapping
Morgan McGuire, Assistant Professor of Computer Science and
D. Luebke
ACM SIGGRAPH and EuroGraphics High-Performance Graphics 2009, August
1, 2009
We describe an extension to photon mapping that recasts the most expensive
steps of the algorithm -- the initial and final photon bounces -- as image-space
operations amenable to GPU acceleration. This enables global illumination for
real-time applications as well as accelerating it for offline rendering.
Image Space Photon Mapping (ISPM) rasterizes a light-space bounce
map of emitted photons surviving initial-bounce Russian roulette sampling on
a GPU. It then traces photons conventionally on the CPU. Traditional photon
mapping estimates final radiance by gathering photons from a k-d tree.
ISPM instead scatters indirect illumination by rasterizing an array of photon
volumes. Each volume bounds a filter kernel based on the a priori
probability density of each photon path. These two steps exploit the fact that
initial path segments from point lights and final ones into a pinhole camera
each have a common center of projection. An optional step uses joint bilateral
upsampling of irradiance to reduce the fill requirements of rasterizing photon
volumes. ISPM preserves the accurate and physically-based nature of photon
mapping, supports arbitrary BSDFs, and captures both high- and low-frequency
illumination effects such as caustics and diffuse color interreflection.
An implementation on a consumer GPU and 8-core CPU renders high-quality
global illumination at up to 26 Hz at HD (1920x1080) resolution, for
complex scenes containing moving objects and lights.
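The Russian-roulette step mentioned above is a standard Monte Carlo device:
each emitted photon survives the initial bounce with probability p and, if it
survives, its power is reweighted by 1/p so the estimate stays unbiased. A
minimal sketch, independent of the paper's GPU implementation:

```python
import random

def russian_roulette(photon_powers, p=0.5):
    """Keep each photon with probability p, reweighting survivors by 1/p."""
    survivors = []
    for power in photon_powers:
        if random.random() < p:
            survivors.append(power / p)
    return survivors
```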
OptiX: A General Purpose Ray Tracing Engine
S. G. Parker, J. Bigler, A. Dietrich, H. Friedrich, J.
Hoberock, D. Luebke, D. McAllister,
M. McGuire, Assistant Professor of Computer Science, K.
Morley, A. Robison and M. Stich
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2010), July
2010
The OptiX engine is a programmable ray tracing system designed for NVIDIA
GPUs and other highly parallel architectures. OptiX builds on the key
observation that most ray tracing algorithms can be implemented using a small
set of programmable operations. Consequently, the core of OptiX is a
domain-specific just-in-time compiler that generates custom ray tracing kernels
by combining user-supplied programs for ray generation, material shading, object
intersection, and scene traversal. This enables the implementation of a highly
diverse set of ray tracing-based algorithms and applications, including
interactive rendering, offline rendering, collision detection systems,
artificial intelligence queries, and scientific simulations such as sound
propagation. OptiX achieves high performance through a compact object model and
application of several ray tracing-specific compiler optimizations. For ease of
use it exposes a single-ray programming model with full support for recursion
and a dynamic dispatch mechanism similar to virtual function calls.
Nifty Assignments
Nick Parlante, Julie Zelenski, Zachary Dodds, Wynn Vonnegut,
David J. Malan,
Thomas P. Murtagh, Professor of Computer Science, Mark
Sherriff, Daniel Zingaro and Todd W. Neller
Proceedings of the 41st ACM technical symposium on Computer
Science education, 478-479 (2010)
Though we dream of awesome lectures, much of what students actually take
away from a course is due to the assignments. Unfortunately, creating
successful assignments is time consuming and error prone. With that in mind, the
Nifty Assignments session is about promoting and sharing successful assignment
ideas, and more importantly, making the materials available on the web.
This paper includes short descriptions of several assignments that have
been successfully used in the authors' courses. During the session, each
presenter will introduce their assignment, give a quick demo, and describe its
niche in the curriculum and its strengths and weaknesses. The Nifty Assignments
home page, http://nifty.stanford.edu, provides more information for each
assignment, with handouts,
data files, starter code, etc.
GEOSCIENCES
Reconstruction of Radiocarbon of Intermediate Water from
the SW Pacific
during the Last Deglaciation
Mea Cook, Assistant Professor of Geosciences, Jordan P.
Landers ’09, Elizabeth L. Sikes, and Thomas P. Guilderson
EOS Transactions of the AGU, 91(26) (2010)
During the last deglaciation, increases in atmospheric CO2
coincided with a decrease in atmospheric Δ14C. One explanation
is that this is the result of the ventilation of a long-accumulating
CO2-rich, 14C-poor deep-ocean reservoir. From a sediment
core taken off the coast of Baja California, Marchitto et al. (2007) showed
evidence of a deglacial water mass very depleted in 14C which they
hypothesize may be from reservoir water that partially re-equilibrated in the
Southern Ocean and spread northward at intermediate depth. We present results
from an intermediate-depth core off northern New Zealand. We estimated late
glacial intermediate water Δ14C from radiocarbon measurements of
pairs of benthic and planktonic foraminifera from benthic abundance peaks. Our
preliminary results show no sign of a 14C depleted water mass prior
to 18300 BP and suggest that circulation at intermediate depth during the late
glacial/early deglacial period was consistent with modern circulation.
Glacial/Interglacial Changes in Nutrient Supply and
Stratification in the Western Subarctic
North Pacific Since the
Penultimate Glacial Maximum
Mea Cook, Assistant Professor of Geosciences, and
others
Quaternary Science Reviews, in press, (2010)
In piston cores from the open subarctic Pacific and the Okhotsk Sea,
diatom-bound δ15N (δ15Ndb), biogenic
opal, calcium carbonate, and barium were measured from coretop to the previous
glacial maximum (MIS6). Glacial intervals are generally characterized by high
δ15Ndb (~8‰) and low productivity, whereas
interglacial intervals have a lower δ15Ndb
(5.7–6.3‰) and indicate high biogenic productivity. These data
extend the regional swath of evidence for nearly complete surface nutrient
utilization during glacial maxima, consistent with stronger upper water column
stratification throughout the subarctic region during colder intervals. An early
deglacial decline in δ15Ndb of 2‰ at ~17.5 ka,
previously observed in the Bering Sea, is found here in the open subarctic
Pacific record and arguably also in the Okhotsk, and a case can be made that a
similar decrease in δ15Ndb occurred in both regions
at the previous deglaciation as well. The early deglacial
δ15Ndb decrease, best explained by a decrease in
surface nutrient utilization, appears synchronous with southern
hemisphere-associated deglacial changes and with the Heinrich 1 event in the
North Atlantic. This δ15Ndb decrease may signal the
initial deglacial weakening in subarctic North Pacific stratification and/or a
deglacial increase in shallow subsurface nitrate concentration. If the former,
it would be the North Pacific analogue to the increase in vertical exchange
inferred for the Southern Ocean at the time of Heinrich Event 1. In either case,
the lack of any clear change in paleo-productivity proxies during this interval
would seem to require an early deglacial decrease in the iron-to-nitrate ratio
of subsurface nutrient supply or the predominance of light limitation of
phytoplankton growth during the deglaciation prior to
Bølling-Allerød warming.
Last Glacial Maximum to Holocene Sea Surface Conditions
at Umnak Plateau, Bering Sea
as Inferred from Diatom, Alkenone and
Stable Isotope Records
Mea Cook, Assistant Professor of Geosciences, B.E. Caissie,
J. Brigham-Grette, K.T. Lawrence, and T.D. Herbert
Paleoceanography, 25, PA1206, (2010)
The Bering Sea gateway between the Pacific and Arctic oceans impacts global
climate when glacial- interglacial shifts in shore line position and ice
coverage change regional albedo. Previous work has shown that during the last
glacial termination and into the Holocene, sea level rises and sea ice coverage
diminishes from perennial to absent. Yet, existing work has not quantified sea
ice duration or sea surface temperatures (SST) during this transition. Here we
combine diatom assemblages with the first alkenone record from the Bering Sea to
provide a semiquantitative record of sea ice duration, SST, and productivity
change since the Last Glacial Maximum (LGM). During the LGM, diatom assemblages
indicate that sea ice covered the southeastern Bering Sea perennially. At 15.1
cal ka B.P., the diatom assemblage shifts to one more characteristic of seasonal
sea ice and alkenones occur in the sediments in low concentrations. Deglaciation
is characterized by laminated intervals with highly productive and diverse
diatom assemblages and inferred high coccolithophorid production. At 11.3 cal
ka B.P. the diatom assemblage shifts from one dominated by sea ice species to
one dominated by a warmer-water North Pacific species. Simultaneously, the SST
increases by 3°C and the southeastern Bering Sea becomes ice-free
year-round. Productivity and temperature proxies are positively correlated with
independently dated records from elsewhere in the Bering Sea, the Sea of
Okhotsk, and the North Pacific, indicating that productivity and SST changes are
coeval across the region.
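Alkenone paleothermometry converts an unsaturation index measured in the
sediment to SST through a linear calibration. The sketch below uses the widely
cited global core-top calibration of Müller et al. (1998); whether that exact
calibration was applied in this study is an assumption.

```python
def uk37_to_sst(uk37_prime):
    """SST (deg C) from the alkenone index, UK'37 = 0.033*SST + 0.044."""
    return (uk37_prime - 0.044) / 0.033

print(f"{uk37_to_sst(0.35):.1f} deg C")   # an illustrative index value -> ~9.3
```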
Shakedown in Madagascar: Occurrence of Lavakas (Erosional
Gullies)
Associated with Seismic Activity
Rónadh Cox, Professor of Geosciences, Danielle
Zentner ’09, A.F.M. Rakotondrazafy and C.F. Rasoazanamparany
Geology, 38, 179-182 (2010)
Erosion via lavaka formation is widespread in Madagascar, but controls on
why and where lavakas occur are not understood. GIS analysis reveals a spatial
correlation between lavaka abundance and the frequency of seismic events: most
lavakas occur in or near areas where earthquakes are most frequent. This
correlation explains the unevenness of lavaka distribution in the Malagasy
highlands, and highlights the importance of natural factors in lavaka formation.
Seismic activity appears to pre-condition the landscape to lavaka formation,
although the mechanism by which this happens is not yet known. Recognizing the
connection, however, allows us to pinpoint areas prone to future lavaka
development in zones of active deforestation. Areas with greatest frequency of
seismic events are most at risk for high-density lavaka development.
Hydrodynamic Fractionation of Zircon
Populations
Rónadh Cox, Professor of Geosciences, Rebecca
Lawrence ’07, R.W. Mapes and D.S. Coleman
Geological Society of America Abstracts with Programs, 41,
541 (2009)
Zircons in transport in the Amazon River vary in grain size by an order of
magnitude (30-300 µm equivalent spherical diameter (ESD): coarse silt to
medium sand), and range in age from a few Ma to 2.8 Ga. Age and size are not
independent of one another. There is an overall trend toward decreasing average
grain size with increasing age, but superimposed on this trend are two
average-size maxima, at 100-300 Ma and 1000-1100 Ma. Mesozoic and Cenozoic
grains, for example, have average ESD 122 µm (with standard error 42
µm), whereas grains older than 2000 Ma have average ESD about half that: 67
µm (± 14 µm). A full Wentworth size class (lower fine
sand-upper coarse silt) separates the two average values, meaning that zircons
in these age populations are hydraulically distinct.
Evidence for hydrodynamic fractionation of zircons comes from comparison of
sand size with the sizes of co-transported zircons. Average grain size of host
sands and incorporated zircons are correlated, but the best correlations are
with specific sand-size fractions. Zircon size is positively correlated with
percent medium sand, and inversely correlated with percent very fine sand
(p<0.0001 in both cases). In samples with >50% medium sand, average
zircon size is 100 µm, compared with 80 µm in samples with >50%
very fine sand. This indicates that zircon deposition is not size-blind, and
that zircons are tracking with hydraulically comparable sand grains.
Comparison of five samples taken from a single Amazon River dune reveals
statistically significant differences among age spectra obtained from different
hydrodynamic microenvironments. Samples from a single locality, with identical
provenance, should have zircon age populations statistically indistinguishable
from one another, but these samples show differences at the 2σ level, with
several age populations occurring in only a subset of the samples. We conclude
that hydrodynamic fractionation of zircons and zircon age populations does
occur. Zircon size should therefore be taken into consideration in detrital
zircon provenance analysis.
Possible Impact Origin for Chaos Terrain on Europa:
Evidence from Shape, Size, and Geographic Distribution
Rónadh Cox, Professor of Geosciences and Andrew T.
Mikell ’09
Geological Society of America Abstracts with Programs, 41,
364 (2009)
Chaos areas on Europa—regions where the surface has been disrupted,
with rafts of remnant crust set in a hummocky matrix of slushy
appearance—do not look like classic impacts; but several lines of evidence
suggest they may be sites where impactors fully penetrated the ice crust. In
this model, the matrix represents the boiled and refrozen surface of the
underlying water layer, and the embedded plates are wreckage of the fragmented
crust. The size and shape of chaos areas are significantly correlated. Large
chaos areas tend to have complex, irregular borders whereas smaller ones are
generally equidimensional with simpler boundaries. There is also a correlation
between chaos shape and raft count: the more irregular the chaos shape, the more
rafts it tends to have. These relationships match predictions from impact
experiments into ice over water, which show a relationship between impact energy
and the shape of the resultant hole. The large jagged openings reflect
wide-field fragmentation of the ice crust above the water layer, which is
greater at higher energy (or in thinner crust).
Chaos areas are concentrated at low latitude, consistent with the
expectation for impacts on a tidally locked satellite. Almost 60% of chaos
areas (653 of 1092 features imaged at 230 m/pixel or better) are clustered
between 30° north and south. Likewise, the percent surface area occupied
by chaos terrain is greatest at low latitudes, and decreases with distance from
the equator. The chaos distribution echoes that of non-penetrating craters on
Europa, which also show low-latitude clustering (14/28 craters > 4 km diameter
are within 30° of the equator, and only 5/28 are at >60°).
Existing data hint at some apex-antapex asymmetry, but we present this
tentatively because longitudinal coverage in the Galileo data is limited. There
are slightly more chaos areas (both absolutely and per degree longitude) in the
leading hemisphere REGMAP strip than in the corresponding trailing hemisphere
strip; and the area occupied by chaos is 11% in the trailing in comparison with
21% in the leading hemisphere strip. Although strong conclusions cannot be drawn
because of the lack of body-wide coverage, these data are consistent with
predicted impact asymmetry.
Evolution of Soils Derived from Weathering of Saprolitic
Bedrock and Colluvium,
Betasso Catchment, Front Range,
Colorado
David P. Dethier, Professor of Geosciences and Alex E.
Blum
Geological Society of America Abstracts with Programs, 41
(7), 336 (2009)
Field and laboratory analysis of soils formed on saprolitic bedrock and on
midslope colluvium in the Betasso catchment suggests that granitic regolith
weathers slowly along the dry eastern margin of the Front Range. The
bedrock-derived soil contains primarily smectitic clays and Fe-oxides as
secondary minerals, which extend into the underlying saprolite. At both slope
positions, kaolinite is present only below 40 cm and in trace amounts (<2 wt
%). The mid-slope soil is developed on colluvium, which lies above and below a
buried soil. Bulk carbon from the A horizon of the paleosol gave calibrated ages
of 9000 yr and 8640 yr. Most weathered samples from Betasso have bulk densities
< 2.0 T m-3 and soil horizons give values < 1.75 T
m-3. Bulk chemical changes from bedrock and colluvium to soils are
relatively small, probably mediated by the relatively dry climate and the
transformation of plagioclase and biotite to secondary minerals without major
losses of silica, cations or minor elements. Downprofile changes in bulk
chemistry are subtle, but regolith generally is enriched in constituents such as
C and Zr and slightly depleted in base-metal cations such as Ca and Na with
respect to bedrock. Apatite concentrations in soils and unweathered colluvial
deposits range from 0.10 to 0.27 weight percent. Concentrations of apatite and
plagioclase are lowest, and the smectite concentrations highest in the buried A
and B horizons, which indicate that the paleosol is the most weathered material
sampled. Apatite morphology does not appear to show depth-related trends, but
total phosphorus concentrations increase in the buried soil profile where
apatite concentrations decrease, suggesting that the high P concentrations were
adsorbed on Fe-oxides during weathering of the paleosurface. Comparison to other
chronosequences in the western USA suggests that the buried colluvial soil at
Betasso likely accumulated clay and Fe-oxides for at least 50 to 150 kyr. The
upper saprolite-derived soil represents at least 35 to 60 kyr of clay and
Fe-oxide accumulation.
Flow Characteristics and Geochemistry of Basaltic Lavas
in the Black Gap Area, West Texas: Implications for CO2
Sequestration in Flood Basalt Formations
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic,
M. D’Errico, B. Surpless, and D. Smith
14th Annual CUR Posters on the Hill (2010)
Worldwide flood basalt formations are considered promising targets for
permanent CO2 capture and storage. The evaluation of flood basalts
for extensive geologic sequestration requires focused, small-scale studies to
assess the porosity and permeability characteristics of basalt flows as well as
these flows’ potential to react with and trap CO2 within new,
stable carbonate minerals. More than 14 laterally continuous, well-exposed,
22-million-year-old basalt flows in the Black Gap (BG) volcanic field, east of Big
Bend National Park (BBNP) in west Texas, are ideal for this type of study.
Based on detailed field analysis of vesiculation patterns in the 2-6 m thick
BG flows, storage of CO2 would occur in the porous,
highly permeable, upper vesicular zone, which makes up 40-70% of the total flow
thickness. The middle dense zone, with low permeability and porosity, would
function as a cap between flows and limit CO2 movement, allowing time
for mineralization to occur. These distinct vesiculation patterns and other
field evidence were used in my study to conclude that the dominant emplacement
mechanism for these flows was inflation, also common in flood basalt formations.
Another important aspect of my research was geochemical analysis of the BG
flows. Based on collected data, I concluded that BG lavas are geochemically
distinct from older rocks exposed in BBNP, suggesting the flows were derived
from a different mantle source. Further geochemical research on BG basalt flows
could constrain the possible in-situ carbonate mineralization rates in
crystalline silicates, which would permanently stabilize dissolved
CO2.
Controls on Seismic Layering in Superfast Spread Crust:
IODP Hole 1256D
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic
EOS Transactions of AGU Fall Meeting Suppl., 90 (52),
V51D-1728 (2009)
Three main factors have been proposed as controls on seismic layering in
ocean crust: igneous composition, porosity, and metamorphic grade. At IODP Hole
1256D, we compare the geologic and geophysical structure of a complete section
of upper oceanic crust. Drilled into oceanic crust of the Cocos Plate at
7°N, 92°W, Hole 1256D is the first hole drilled continuously
through sediments, basalts, dikes and into the uppermost gabbros of the oceanic
crust (Teagle et al., 2006). The crust at Hole 1256D formed at the East Pacific
Rise 15 Ma at a superfast spreading rate (~22 cm/y full spreading rate), faster
than any modern spreading on Earth. By combining remote, in situ, and laboratory
measurements of seafloor crustal rocks from one site, we take advantage of the
first opportunity to determine controls on seismic layering of normal oceanic
crust through extrusive flows, dikes, and into the gabbros.
Preliminary analysis of shipboard measurements of crustal P-wave velocities (<6.5 km/s) determined that the present base of Hole 1256D (1507 mbsf) is within Layer 2. Downhole velocity measurements only reach the uppermost gabbro
(1422 mbsf) drilled, and Swift et al. (2008) concluded these deepest downhole
measurements were consistent with Layer 2: steep Vp gradients with depth and
maximum Vp ~6.5 km/s. Regional seismic experiments conducted near Hole 1256D
showed a Layer 2-3 transition at 1450 to 1750 meters below seafloor (mbsf), or
1200 to 1500 meters subbasement (msb). With additional samples and downhole data
from IODP Hole 1256D, we further constrain seismic layers of the ocean crust and
examine the lithologic and physical/chemical controls on seismic layering in
detail.
Previously, the Layer 2-3 boundary has been both drilled and imaged only at
ODP Hole 504B. The deepest hole drilled into oceanic crust to date is still Hole
504B, over 2 km deep, but it only reaches into the dikes. Previous efforts at
ocean drilling have been successful at accessing basalts, dikes, and gabbros,
but not all at a single location within normal oceanic crust. This work compares
igneous composition, alteration, and porosity across major seismic boundaries in
the upper oceanic crust: within Layer 2 and the Layer 2-3 boundary at a single
location, and in the context of previously drilled ODP Hole 504B, which reached
Layer 3 but not gabbros.
A Model for Seamount Formation Based on Observations of a
California Ophiolite
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic, and Susan Schnur
EOS Transactions of AGU Fall Meeting Suppl., 90 (52),
V51D-1746 (2009)
Seamounts are an important feature of the seafloor but relatively little is
known about their internal structure. Previous work has examined seamount
surface morphology via seafloor mapping and submersible observations, broad
geophysical characteristics via remote methods and one-dimensional structure
through drilling. However, to model the formation of seamounts we need a better
understanding of two- and three-dimensional variations, which are difficult to
study in any detail beyond the surface of the seafloor. Ophiolitic seamounts
provide a unique window for studying these difficult to reach features. This
study proposes a model for seamount formation using field observations and
laboratory measurements of a seamount preserved in the subduction-related
Cretaceous Franciscan formation of northern California.
The preserved seamount in our study site likely formed in water deeper than
1.5 km, reached over 1 km high, and may have been formed at or near a ridge
axis, with some similarities to seamounts formed on or near the modern East
Pacific Rise. The relative paucity of faulting and fractured shear zones
throughout the field area indicates little or no rearrangement of units during
emplacement of the seamount onto land. The dominant lava flow morphology is
pillow lavas, which constitute roughly three-quarters of observed flows. Massive
flows and hyaloclastite and pillow fragment (HPF) flows each represent about
one-eighth of the total thickness of flows, and sheet and lobate flows are both
relatively minor. Volcanic facies are further classified based on density,
porosity, igneous and metamorphic petrography, and major and trace element
geochemistry.
Based on variations in flow morphology and related physical properties, we
generalize seamount formation into three phases. A first ‘basal’
stage is dominated by small, tightly packed, lower-porosity pillows, which form
at greater water depths atop existing oceanic crust. A second
‘intermediate’ stage is represented by several individual flow
sequences, with the number and thickness of flow sequences present dependent
upon rate and duration of eruption at a specific seamount. A typical flow
sequence includes massive flows at the base, followed by pillow lavas and capped
by HPF flows. The pillow lavas of this stage are more vesicular than the basal
stage pillows and have a relatively high fraction of inter-pillow hyaloclastite,
in part because they are emplaced above the initial basal flows in shallower
water. Hydrothermal alteration of the larger pillows and field relations imply
the intermediate stage lavas at this site may have been emplaced within a
collapsed caldera structure. Finally, a third ‘cap’ stage occurs on
the upper edifice of a seamount, at the end of an eruptive period. This cap
stage is not dominated by a single morphology, but instead includes significant
variation in flow morphology and the increased presence of sheet and HPF flows.
Our model has important implications for understanding the structure and
formation of seamounts, and provides context for one-dimensional drill holes,
surface observations, and regional-scale geophysical measurements of
seamounts.
Baseline Data on Motivation and Learning Strategies of
Students
in Physical Geology Courses at Multiple
Institutions
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic, and others
Geological Society of America Abstracts with Programs, 41
(7), 603 (2009)
Student adoption of cognitive strategies such as good thinking and
reasoning is either limited or promoted by affective factors such as motivation,
attitudes, feelings and emotions. The GARNET (Geoscience Affective Research
Network) project examines the connection between affective factors and
geoscience learning outcomes. Participating instructors used the Motivated
Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1993) to
investigate how aspects of the affective domain varied for students and to
identify variations with classroom instruction strategies and learning
environments. Here we report baseline MSLQ data collected from 13 physical
geology classes at six institutions during fall 2008 and spring 2009 semesters.
These are the first data to compare a diverse array of student values, beliefs,
and learning strategies across multiple general education geoscience
courses.
The MSLQ is an 81-item scale divided into six motivation and cognitive
subcategories containing 15 separate subscales. GARNET institutions included
public research universities, a private liberal arts college, and a community
college. We analyzed matched pre and post MSLQ surveys, demographic, and
performance data for 340 students. In any one semester, there are no large
differences in MSLQ pre-instruction scores between different classes (hence
institutions), suggesting that students’ initial motivation and learning
strategies are fairly similar across institutions. There are some significant
differences in MSLQ pre-instruction scores between fall and spring populations,
suggesting somewhat different student attitudes as a function of semester.
Within individual classes, students generally report little change in the study
strategies they adopt (e.g., rehearsal, critical thinking) over the length of
the semester. Various subscale categories have different trends for high and low
performing students. Factors such as self-efficacy, test anxiety, and peer learning record significant pre/post changes (p<0.05) in multiple classes across both semesters. By the end of the semester, most students were less self-confident, less anxious in test situations, and more likely to seek help from peers and instructors.
What Motivations and Learning Strategies Do Students
Bring with Them
to Introductory Geology?
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic, and others
Geological Society of America Abstracts with Programs, 41
(7), 603 (2009)
We examine how demographic characteristics relate to the motivation and
learning strategies of students entering introductory geology. We use the
Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1993) to
investigate how motivations and strategies vary among students entering
introductory geology. As a group, students enter these courses with a range of
motivations (e.g., goal orientation and control beliefs) and learning strategies
(e.g., study methods, critical thinking, metacognition). Compared to non-geology
students from a prior study (VanderStoep et al., 1996), entering introductory
geology students are more extrinsically than intrinsically motivated, and have a
lower goal orientation (task value).
Preliminary analyses indicate differences in motivation and strategies by
gender, age, ethnicity, teaching style preference, reason for taking the course,
and experience and interest in science. Students reporting lower interest in
science upon entering introductory geology assign a lower degree of intrinsic
motivation, self-efficacy, elaboration, critical thinking, metacognition, effort
regulation, and task value to their coursework. Students with prior experience
in college science courses report more self-efficacy and control of learning.
Critical thinking correlates with experience. Students who enroll in
introductory geology primarily to fulfill a general education requirement report
lower extrinsic motivation, critical thinking, and task value. Students who
prefer dominantly lecture-style classes (in contrast with active learning
classes) score slightly higher on most MSLQ subscales, indicating higher
motivation and more positive attitudes towards learning introductory geology.
With age, intrinsic motivation increases, extrinsic motivation decreases, and a
preference for peer learning decreases dramatically. Students from
under-represented ethnic groups report lower intrinsic and extrinsic motivation.
Compared to other factors, gender has a small influence on motivation and
attitude among incoming students.
Our results indicate student motivation and learning strategies vary by
demographic, implying that instructors of introductory geology should consider
incoming student affect in creating effective active learning
environments.
The Effect of Instructor and Learning Environment on
Student Motivation and Attitudes
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic, and others
Geological Society of America Abstracts with Programs, 41
(7), 603 (2009)
We observed introductory geology classrooms in an attempt to characterize
student motivations and attitudes in a variety of classes. We assessed classroom
learning environments using the Reformed Teaching Observation Protocol (RTOP;
Sawada et al, 2002), a 25-item instrument that yields a score of 0-100 for each
class. In an effort to assure appropriate rating based on a college lecture
classroom, the observers developed a rubric that led to interrater reliability
of R² = 0.95.
Thirteen different physical geology classrooms (community college, public
universities, and private college) had RTOP scores ranging from 19-85. We
identified three representative groups: high (64-68), medium (45-46) and low
(20-24). In a high-scoring classroom, lectures are rare, and students are actively engaged and drive the direction of the course. We characterize the middle
scoring classroom as an active lecture environment (e.g., students are involved
in discussions through clickers). A more traditional lecture format with the
instructor dominating the conversation is characteristic of a low scored
classroom.
We compared RTOP scores with the Motivated Strategies for Learning
Questionnaire (MSLQ; Pintrich et al, 1993). The MSLQ characterizes student
attitudes and motivations about a class. There are clear correlations between
the RTOP and some components from the MSLQ. On average, RTOP scores had a weak
positive correlation with extrinsic goals and use of rehearsal strategies, a
strong negative correlation with control of learning and self-efficacy, and a
strong positive correlation with test anxiety.
Most students perceive that they have less control of their learning, have
less confidence about their ability to be successful and have greater test
anxiety in an active learning physical geology course that requires them to take
more responsibility for their own success. The highest performing students did
not exhibit the same trends. As students become more aware of the factors that
influence their learning, and are challenged intellectually, they may become
more cognizant of their own abilities and limitations. Ultimately, students who
become more aware of their own strengths and weaknesses in the classroom are
more academically successful. This underscores the importance of helping
students develop strong metacognitive skills.
The Effect of Student Motivation and Learning Strategies
on Performance in Physical Geology Courses
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic, and others
Geological Society of America Abstracts with Programs, 41
(7), 604 (2009)
We sought to determine whether students’ attitudes and learning
strategies influence their performance in college-level physical geology
courses. We administered pre- and post-course Motivated Strategies for Learning
Questionnaires (MSLQ, Pintrich et al, 1993) in physical geology classes taught
by 13 instructors at six colleges and universities and compared the results to
final class grade. Preliminary analysis using step-wise multiple regression of
matched pairs of student responses to the MSLQ (n = 152 for fall 2008; n = 188
for spring 2009) reveals that grades cannot be predicted from pre-course MSLQ
scores, but are significantly (p < 0.01) correlated with several subscales of
the MSLQ administered near the end of the course. The strongest predictor of
final class grade was the student’s score on the self-efficacy subscale.
Students with high self-efficacy are confident that they can understand class
material, do well on assignments and exams, and master the skills being taught
in the course.
Other key subscales of the MSLQ had effects that varied by semester. In
fall 2008, students’ scores on the peer learning (PL) and control of
learning beliefs (CB) subscales each had a negative correlation with their
performance. Students who score high on the peer learning subscale often study
with peers. Those who score high on the control of learning beliefs subscale
feel that their individual study efforts determine their academic performance.
In spring 2009, students’ scores on the intrinsic goal (IG) subscale had a
negative correlation with their performance, but students’ scores on the
task value (TV) subscale had a positive correlation with their performance.
Students who score high on the intrinsic goal subscale strive to thoroughly
understand course content and they prefer course material that challenges them
and arouses their curiosity. Students who score high on the task value subscale
like the course content and find it useful.
Our results suggest that strategies to improve students’
self-efficacy have a strong chance of improving student performance in college
physical geology classes.
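For readers unfamiliar with the step-wise procedure named above, a minimal forward-selection sketch follows. The data are synthetic and the variable names are illustrative stand-ins for MSLQ subscales; this is not the GARNET data or analysis code.

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Minimal forward-selection sketch: greedily add the predictor that
    most reduces the residual sum of squares (synthetic illustration only)."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(max_terms):
        def rss(cols):
            Z = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            return float(((y - Z @ beta) ** 2).sum())
        best = min(remaining, key=lambda c: rss(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
        print("added", names[best], "RSS =", round(rss(chosen), 2))
    return chosen

rng = np.random.default_rng(0)
n = 200
self_eff = rng.normal(size=n)   # hypothetical MSLQ subscale scores
task_val = rng.normal(size=n)
anxiety = rng.normal(size=n)
grade = 2.8 + 0.5 * self_eff + 0.2 * task_val + rng.normal(scale=0.5, size=n)
forward_stepwise(np.column_stack([self_eff, task_val, anxiety]), grade,
                 ["self-efficacy", "task value", "test anxiety"])
```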
Seafloor Volcanic and Hydrothermal Processes Preserved in
the Abitibi Greenstone Belt
of Ontario and Quebec, Canada
Lisa A. Gilbert, Assistant Professor of Geosciences and
Williams-Mystic and N. R. Banerjee
Keck Geology Consortium Symposium, 22, 106 (2009)
More than 60% of the Earth’s surface is composed of oceanic crust.
The formation of new oceanic crust at ridge axes and the reincorporation of old
crust into the mantle or its accretion onto continental margins at subduction
zones are perhaps the most fundamental components of the plate tectonic cycle.
These processes control the physiography of the Earth and the chemical and
thermal evolution of the crust and mantle. Seafloor volcanoes, including
seamounts and mid-ocean ridges, are dynamic environments where globally
significant chemical, biological, and heat fluxes occur between the lithosphere
and hydrosphere. The focus of this project was a study of water-rock-microbial
alteration of the ancient seafloor preserved in the Abitibi Greenstone Belt
(AGB).
Greenstone belts such as the AGB are useful for understanding ancient
seafloor processes that we cannot access directly. When we study modern oceanic
crust it is generally either just on the very surface of the seafloor, in a
one-dimensional hole drilled into the crust, or by some remote method that
prohibits detailed mapping. Greenstone belts are not perfect analogs of normal
ocean crust and are likely preserved because they may be atypical (e.g. Karson,
2002). However, seafloor volcanic rocks are not all generated at normal
mid-ocean ridge settings, and seamounts, back-arc spreading, and oceanic
plateaus represent a significant portion of volcanic rocks found on the
seafloor.
Investigations of seafloor hydrothermal systems are inherently
interdisciplinary, reflecting the complex linkages between geological,
biological, chemical, and physical processes. In particular, the role of
microorganisms in the alteration of oceanic crust has only recently been
demonstrated (see Furnes et al., 2008 for a review). The temperature and depth
limits of oceanic basement microbiological activity have yet to be explored, but
microbial activity occurring in the sub-seafloor biosphere may have a profound
impact on processes and chemical fluxes during water-rock reactions and possibly
hold the key to the development of life on the Earth and other planets.
Borings in Quartzite Surf Boulders from the Upper
Cambrian Basal Deadwood Formation,
Black Hills of South
Dakota
Markes E. Johnson, Professor of Geosciences, M.A. Wilson,
and J.A. Redden
Ichnos, 17, 48-55 (2010)
Shallow, semi-spherical borings occur in clusters with densities of
1–3.5/cm² in quartzite boulders and in vein quartz from the localized
basal conglomerate of the Cambrian-Ordovician Deadwood Formation in the
east-central Black Hills of South Dakota. Some borings are superimposed on
primary but enigmatic semi-circular structures, 2.5 to 5 cm in diameter, which
are soft-sediment trace fossils formed prior to lithification. The macroborings
are the first to be recorded from quartzite and vein quartz. Host boulders were
eroded from near vertically dipping Paleoproterozoic quartzites from several
different stratigraphic units. The thin boulder conglomerate grades or abruptly
changes to sandstone through a layer ≤ 2 m thick in the Marjuman
transgression (regionally correlated to the Cedarina dakotaensis
trilobite zone). This transgression occurred prior to the start of the globally
recognized Upper Cambrian Paibian Stage. Physically similar rocky-shore
settings are widely known from quartzite islands of Cambrian age in Wisconsin,
Middle Ordovician age on Ontario’s Manitoulin Island, Ordovician-Silurian
age in Manitoba, and Devonian age in Western Australia. Erosion of quartzite
surf boulders of equal or larger size occurred in all those regions, but the
Black Hills of South Dakota is the only region where borings in quartzite are
documented.
Quaternary Intertidal Deposits Intercalated with
Volcanic Rocks on Isla Sombrero Chino
in the Galápagos
Islands, Ecuador
Markes E. Johnson, Professor of Geosciences, Paul Karabinos,
Professor of Geosciences
and V. Mendia
Journal of Coastal Research, 26, 762-768 (2010)
A stratigraphic succession composed of limestone intercalated with volcanic
ash and basalt capped by a conglomerate of mixed limestone and basalt cobbles
was deposited in a trough-shaped depression approximately 25 m wide and 50 m
long to a thickness of 1.62 m on the southwest side of Isla Sombrero Chino in
the Galápagos Islands of Ecuador. Two layers of well-cemented
calcarenite up to 20 cm thick accumulated as beach deposits with bioclasts of
gastropods dominated by the Galápagos Periwinkle (Nodilittorina
galapagiensis), a representative of the Beaded Hoofshell (Hipponix
grayanus), crab fragments, and bird bones. Crustacean remains most likely belong to the Sally Lightfoot Crab (Grapsus grapsus). The bird bones are attributed to Audubon’s Shearwater (Puffinus lherminieri). Distinctly intertidal in origin, such a mixed assemblage of
invertebrates and vertebrates is unusual and the association with basalt flows
is seldom met in the rock record. The pristine state of the volcanic cone on
Sombrero Chino is consistent with a 3He exposure age of 13 ± 0.8 ka. The
age of the basalt-limestone sequence is unknown, but must be younger than the
3He exposure age. The basalt-limestone sequence is elevated approximately 3 m
to 4 m above current sea level. This implies that the intertidal limestone was
deposited during an interval of higher sea level or, more likely, was uplifted
by magmatic inflation. Such intertidal deposits, in conjunction with more
precise dating, have the potential to constrain the history of relative
sea-level change during island growth and isostatic subsidence related to
volcanism and lithospheric cooling. Intertidal deposits of the kind reported here also help to distinguish between a monogenetic and a polygenetic history for volcanic islands.
Enhancing 3-D Visualization of Structural Geology
Concepts
Paul Karabinos, Professor of Geosciences
Geological Society of America Abstracts with Programs, 41,
197 (2009)
Structural geology students solve numerous 3-D spatial problems using 2-D
projections, yet many have difficulty visualizing how the projections work.
Thus, they may learn how to solve sample problems by a given set of operations,
but they have difficulty extrapolating to novel problems. Well-drawn perspective diagrams can help some students understand how a projection such as the stereographic projection works, but their value is limited by their static nature.
3-D visualization can be dramatically enhanced with the aid of computer
applications, which make it possible to construct accurate 3-D models that can
be manipulated by the user to show how projections work. Google SketchUp, a free
application, can be used to create a realistic model of the stereographic
projection. The lower hemisphere, projection point, projection plane (complete
with great and small circles), and planes or lines representing geologic
features (all referenced to the center of the sphere) can be constructed
accurately enough to measure angles within the 3-D model. The ability to rotate
the model, and all of its elements, allows students to connect the 3-D spatial
relationships with the 2-D stereographic projection. Once the model is rotated,
it is easy for students to retain the 3-D perspective.
The three-point problem is a classic example of a 3-D problem that is
solved by projecting ‘on the fly’ into a plane. ArcMap and ArcScene,
commercially available GIS software, can be combined to produce precise 2-D and
3-D illustrations of the solution to this problem. Turning layers on
sequentially permits an orderly demonstration of how the three initial points
can be used to construct the plane and structure contours of a geologic contact.
Rotation of the 3-D model shows how the intersections of the topographic and
structure contours define the outcrop pattern, and how to determine the depth of
a contact below the surface. These programs can also be used to show how to
measure displacement across a fault and how to construct cross-sections from map
data.
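To make the three-point problem concrete, here is a minimal numerical sketch (Python with NumPy, not the ArcMap/ArcScene workflow described above); the coordinates and the right-hand-rule strike convention are assumptions chosen for illustration.

```python
import numpy as np

def plane_from_three_points(p1, p2, p3):
    """Three-point problem: given three (x, y, elevation) points on a planar
    contact, return its strike and dip in degrees. Coordinates are assumed
    to be in a local east (x), north (y), up (z) frame."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # normal to the contact plane
    if n[2] < 0:
        n = -n                           # keep the normal pointing up
    dip = np.degrees(np.arctan2(np.hypot(n[0], n[1]), n[2]))
    # dip direction is the downhill azimuth; right-hand-rule strike is 90 deg less
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360
    strike = (dip_dir - 90) % 360
    return strike, dip

# contact crops out at three points with different elevations (meters)
print(plane_from_three_points((0, 0, 100), (500, 0, 60), (0, 400, 80)))
# -> approximately (328.0, 5.4): strike ~328 deg, dip ~5.4 deg toward the NE
```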
At present these programs can help show students how to use 2-D projections
and geometric techniques to solve 3-D problems. With modification, however, the
next generation of programs could actually solve such problems in a 3-D
perspective without 2-D projections.
Evidence for Kilometer-Scale Fluid-Controlled
Redistribution of Graphite in Pelitic Schist
in the Taconic Thrust
Belt on Mount Greylock, Massachusetts
Paul Karabinos, Professor of Geosciences, Ruth F. Arnoff
’09 and Eliza Nemser ’98
Geological Society of America Abstracts with Programs, 42, 97
(2010)
During the Ordovician Taconic orogeny, Neoproterozoic to Ordovician shale
and siltstone from the Laurentian slope and rise were thrust over coeval shelf
rocks. Flysch, eroded from the advancing thrust sheets, blanketed the shelf
rocks, and was locally overridden by the thrust sheets. On Mount Greylock in Massachusetts, graphitic and non-graphitic schist are indistinguishable except for their C content. The graphite-rich schist was mapped as flysch of the Walloomsac Fm (WF), and the contact that separates the WF from graphite-poor rocks of the Greylock Schist (GS) was interpreted as a thrust. This assumed that the present
distribution of graphite reflects the primary preservation of organic material
in sediments. Based on our mapping, the gradational contact between WF and GS is
difficult to locate and we found no strain gradients near it. In contrast, a
wide zone within the WF especially near the contact with the structurally lower
marble is a zone of intense deformation and shares many characteristics of
mélange.
The distinction between graphite-rich and
graphite-poor rocks is not a reliable stratigraphic tool. Typically
graphite-rich (1-2 wt%) and graphite-poor (0.1-0.2 wt%) rocks are interlayered on scales ranging from 10 to 100 m. Examination of 200 thin sections reveals
that 55% of WF samples are graphite-rich, but 45% are graphite-poor. Although
only 10% of the GS samples are graphite-rich, 55% contain plagioclase grains
with abundant graphite inclusions surrounded by a graphite-poor matrix,
indicating that graphite was removed from the matrix by aqueous fluids.
We suggest that C was originally distributed widely and more evenly throughout the pelitic schist, and that large-scale redistribution of graphite occurred during
metamorphism and thrusting. Aqueous fluids dissolved C in large volumes of
rock, which are now graphite-poor, but commonly contain plagioclase with
abundant graphite inclusions. Faults focused fluid flow and retrograde
reactions, commonly observed in fault zone rocks, consumed water and decreased C
solubility. C precipitated from these fluids to produce graphite-rich schist.
Graphite weakened fault zone rocks and created a positive feedback between
faulting, fluid flow, and graphite precipitation. We map the main thrust between
graphite-rich schist and marble and include all of the pelites in the
hanging-wall.
Visualizing Strain with the Rf-F Method Using an
Interactive Computer Program
Paul Karabinos, Professor of Geosciences and Chris
Warren
Journal of Structural Geology, 32, 86-92 (2010)
The Rf–Φ method is a powerful graphical approach for
estimating finite strain of deformed elliptical objects, but one that students
commonly find difficult to understand. We developed a program that allows users
to explore visually how deforming a set of elliptical objects appears on
Rf–Φ plots. A user creates or loads the ellipses and then
deforms them by simple shear, pure shear, or rigid rotation. As the ratio of the
long to short axis of the ellipses (Rf) and long-axis orientations
(Φ) change in one window, the Rf–Φ plot continuously
and instantaneously updates in another. Users can save snapshots of the deformed
elliptical objects and the Rf–Φ plots to record graphical
experiments. The program provides both Rf vs. Φ and polar
ln(Rf) vs. 2(Φ) plots. The user can ‘undeform’
ellipses quickly and easily, making it possible to inspect the
‘original’ shapes and orientations of objects, and to evaluate the
plausibility of the determined strain values. Users can export information about
the pebbles' shape and orientation to spreadsheets for rigorous statistical
analysis. This program is written in Java and so can run on virtually any
operating system. Both the source code and the application will be freely
available for academic purposes.
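As a rough illustration of what such a program computes, the following fragment deforms unit-area ellipses with a deformation-gradient matrix and recovers the (Rf, Φ) pairs that would be plotted. This is a sketch only, in Python rather than the published Java application, and all names are ours.

```python
import numpy as np

def rf_phi(ellipses, F):
    """Deform ellipses, given as (axial ratio Rf, long-axis angle phi in
    degrees), by the 2x2 deformation-gradient matrix F; return new (Rf, phi)
    pairs, the quantities plotted on an Rf-phi diagram."""
    out = []
    for rf, phi in ellipses:
        t = np.radians(phi)
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        # shape matrix of the unit-area ellipse x^T A x = 1
        A = R @ np.diag([1 / rf, rf]) @ R.T
        Fi = np.linalg.inv(F)
        A2 = Fi.T @ A @ Fi               # an ellipse transforms as A -> F^-T A F^-1
        w, v = np.linalg.eigh(A2)        # smallest eigenvalue <-> long axis
        out.append((np.sqrt(w[1] / w[0]),
                    np.degrees(np.arctan2(v[1, 0], v[0, 0])) % 180))
    return out

simple_shear = np.array([[1.0, 1.5], [0.0, 1.0]])   # shear strain gamma = 1.5
print(rf_phi([(1.5, 0.0), (1.5, 60.0), (1.5, 120.0)], simple_shear))
```

Inverting F in the same way "undeforms" the ellipses, the operation the program uses to inspect original shapes and orientations.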
Evidence for an Orogen-Parallel, Normal-Sense Shear Zone
around the Chester Dome, Vermont: A Possible Template for Gneiss Dome Formation
in the New England Appalachians
Paul Karabinos, Professor of Geosciences, E.S. Mygatt
’03, S. M. Cook ’88, and M. Student ’01
Geological Society of America Memoir 206, 1-29 (2010)
The New England Appalachians contain two north-south trending sets of
gneiss domes. The western belt, which includes the Chester dome, contains
thirteen domes that expose either 1 Ga Laurentian basement rocks or
approximately 475 Ma rocks of the Shelburne Falls arc. The eastern belt
contains twenty-one gneiss domes cored by either 600 Ma crust of possible
Gondwanan affinity or approximately 450 Ma rocks of the Bronson Hill arc. Domes
in both belts are surrounded by Silurian and Early Devonian metasedimentary
rocks, which were deposited in two north-south trending basins before the
Acadian orogeny. The Chester dome in southeastern Vermont, the main focus of
this study, is an intensively studied, classic example of a mantled gneiss dome.
Lower Paleozoic units around the Chester dome are dramatically thinner than they
are elsewhere in southern Vermont, and are locally absent. A strong spatial
correlation between the highly attenuated mantling units and highly strained,
mylonitic rocks suggests the presence of a ductile, normal-sense shear zone.
Garnet-bearing rocks in the core of the dome record metamorphism during
decompression of 2 to 3 kbar, whereas rocks above the high-strain zone were
metamorphosed during nearly isobaric conditions. Strain markers and kinematic
indicators suggest that extension occurred during northward extrusion of lower
to middle crustal wedges of Proterozoic and Ordovician quartz-feldspar-rich
gneisses below and up into a thick tectonic cover of Silurian mica-rich
metasediments that had been transported westward in large-scale nappes. If the
ductile, normal-sense shear zone is responsible for syn-metamorphic
decompression, as we propose, extrusion occurred at approximately 380 Ma.
From Rodinia to Pangea: The Lithotectonic Record of the
Appalachian Region
Paul Karabinos, Professor of Geosciences, R.P. Tollo, M.J.
Bartholomew, and J.P. Hibbard
Geological Society of America Memoir 206, 786 p. (2010)
This Geological Society of America Memoir contains thirty-six new articles
tracing the geologic history of eastern North America from 1200 to 200 million
years ago. It includes summary articles written by leading experts, which
describe major tectonic events in eastern North America, and specialized
research papers describing recent progress in reconstructing Paleozoic mountain
building processes in the Appalachians.
The Outcomes of the Life of a Geologist: The
Autobiography of T. Nelson Dale (1845-1937)
R. A. Wobus, Professor of Geosciences
Connecticut Academy of Arts and Sciences Transactions Series, 173 p.
(2009)
T. Nelson Dale is known not so much as a teacher but as one of the most
important and productive early contributors to the geological study of the
Northeast. With virtually no formal education beyond preparatory school (though
he yearned to attend Yale), he embarked on a professional career that spanned
seven decades, four of them with the U.S. Geological Survey, resulting in 60
geological publications (including 20 U.S.G.S. bulletins and 7 papers in the
American Journal of Science). He became an expert—perhaps the expert of
his time—on the geology of western New England and on the commercial
building stones (granite, slate, marble) of the United States. His field
studies in New England involved over 12,000 miles of hiking and more of driving
the primitive roads; according to one of his grandsons, he was “a
tiny...dynamo of a man who walked as far and fast as he was able for as long as
he lived.”
The record of his early schooling by remarkable tutors, his travels in
Europe, and the fortunate acquaintances who helped him on his unconventional
career path is followed by a chronological narrative of his geological field
work in Europe and the U.S. The account of his explorations is accompanied by
wonderful vignettes of the places he visited and the people he met. A
philosophical man of firm religious convictions, his stories are laced with
Biblical references and character judgments from which not even the most casual
acquaintances are immune. In his later years he wrote and spoke widely about
philosophy, education, and religion.
Twenty-Two Years of Undergraduate Research in the
Geosciences – The Keck Experience
R. A. Wobus, Professor of Geosciences, A. DeWet, C. Manduca
’80, and L. Bettison-Varga
Geological Society of America Special Paper 461, 163-172
(2009)
The Keck Geology Consortium is an 18-college collaboration focused on
enriching undergraduate education through development of high-quality geoscience
research experiences for undergraduate students and faculty participants. The
consortium projects are year-long research experiences that extend from summer
project design and fieldwork through collection of laboratory data and analysis
during the academic year, to the culminating presentation of research results at
the annual spring symposium. The Keck experience incorporates all the
characteristics of high-quality undergraduate research. Students are involved
in original research, are stakeholders and retain intellectual ownership of
their research, experience the excitement of working in group and independent
contexts, discuss and publish their findings, and engage in the scientific
process from conception to completion. Since 1987, 1094 students (1175 slots,
81 repeats) and over 121 faculty (410 slots, multiple repeats) have participated
in 137 projects, providing a substantial data set for studying the impact of
undergraduate research and field experiences on geoscience students. Over 56%
of the students have been women, and since 1996, 34% of the project faculty have
been women. There are now 45 Keck alumni in academic teaching and research
positions, a matriculation rate three times the average of U.S. geoscience
undergraduates. Twenty-two of these new faculty are women, indicating
remarkable success in attracting women to and retaining women in academic
geoscience careers.
Documenting the Timescale of Pluton Construction: U-Pb
Geochronology
of the Subvolcanic Vinalhaven Intrusion and Associated
Volcanic Rocks
R. A. Wobus, Professor of Geosciences, D.P. Hawkins, N.E.
Brown,
R.A. Wiebe, and E.W. Klemetti ’99
Geological Society of America Abstracts with Programs 41 (7),
140 (2009)
The timescale of pluton construction and the relationship between granite
plutons and rhyolitic volcanic complexes remain elusive problems that limit our
understanding of crustal magmatism. High-precision geochronology of plutons
associated with related volcanic rocks, such as the Silurian Vinalhaven
Intrusive Complex (VHIC) on the coast of Maine, provides an ideal opportunity to
constrain the timescale of pluton construction and establish the temporal
relationship needed to examine the connection between subvolcanic rocks and
associated volcanic rocks.
The VHIC is an excellent locality for such a study because the
spatially-associated volcanic rocks, including flow-banded rhyolite domes
mantled by block-and-ash-flow breccia deposits, almost certainly erupted from
the intrusion. However, high-precision CA-TIMS U-Pb zircon ages of five granite
samples from the intrusion indicate that the exposed level of the intrusion was
constructed over at least 0.7 m.y. (419.9 ± 0.1 Ma to 419.2 ± 0.1 Ma),
and two rhyolite tuff samples from the volcanic section indicate that the
preserved volcanic rocks are 1.3 m.y. older than the intrusion. We believe this
gap in time reflects both the poor preservation of the volcanic sequence and the
level of exposure of the intrusion, a conclusion supported by a recent
microgravity study of the VHIC, rather than a lack of consanguinity.
Two additional lines of evidence support this conclusion. First, granite
samples from the intrusion yielded magmatic zircon crystals, with U-Pb ages that
span the gap in time between the VHIC and the volcanic rocks. Given field
evidence for rejuvenation of crystal mush during magma replenishment, these
older zircon crystals represent antecrysts. Second, the Vinalhaven diabase
(VHD), which occurs beneath the oldest block-and-ash flow breccias, contains
ubiquitous quartz xenocrysts and rare xenoliths of granite. Together with whole
rock major and trace element compositions, these observations suggest that the
VHD formed by bulk mixing of gabbro and granite early in the history of the
VHIC. Zircon xenocrysts in the VHD are similar in habit, size, color, and zoning
characteristics to zircons from the VHIC granite, were almost certainly derived
from the VHIC, and must be 421.2 Ma or older. Thus, we believe the pluton was
constructed over about 2 m.y.
MATHEMATICS
AND STATISTICS
Riot at the Calc Exam and Other
Mathematically Bent Stories
Colin Adams, Thomas T. Read Professor of Mathematics
American Mathematical Society (July 2009)
A compendium of humorous stories related to math, including “A
Difficult Delivery”, about how similar creating a new proof is to giving
birth, and “The Integral: A Horror Story”.
The Projection Stick Index of Knots
Colin Adams, Thomas T. Read Professor of Mathematics with T.
Shayler
Journal of Knot Theory and Its Ramifications, 18,
Issue 7, 889-899 (2009)
The stick index of a knot K is defined to be the least number of line
segments needed to construct a polygonal embedding of K. We define the
projection stick index of K to be the least number of line segments in any
projection of a polygonal embedding of K. In this paper, we establish bounds on
the projection stick index for various torus knots. We then show that the stick
index of a (p,2p+1)-torus knot is 4p, and the projection stick index is 2p+1.
This provides examples of knots such that the projection stick index is one
greater than half the stick index. We show that for all other torus knots for
which the stick index is known, the projection stick index is larger than this.
We conjecture that a projection stick index of half the stick index is
unattainable for any knot.
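For a concrete instance of the result stated above, take p = 2 (here s and ps are assumed shorthand for the stick index and projection stick index):

\[
s\big(T_{2,5}\big) = 4p = 8,
\qquad
\mathrm{ps}\big(T_{2,5}\big) = 2p + 1 = 5 = \frac{8}{2} + 1,
\]

so the (2,5)-torus knot realizes a projection stick index exactly one greater than half its stick index.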
The Lord of the Rings Part I: The NSF Fellowship of the
Rings
Colin Adams, Thomas T. Read Professor of Mathematics
Mathematical Intelligencer, 31, No. 2, 51-53
(2009)
Wherein Frodo Baggins must destroy the ring, a noncommutative ring with
unity, by throwing it into the math building at Purdue, as they have no interest
in noncommutative rings.
Happiness is a Warm Theorem
Colin Adams, Thomas T. Read Professor of Mathematics
Mathematical Intelligencer, 31, No. 3, 10-12 (2009)
Why is the pursuit of mathematics so like a manic depressive
psychosis?
The Adventures of Robin Caruso
Colin Adams, Thomas T. Read Professor of Mathematics
Mathematical Intelligencer, 31, No. 4, 15-17
(2009)
If you were a mathematician marooned on an island, would it be disaster or
nirvana?
Job Solicitation
Colin Adams, Thomas T. Read Professor of Mathematics
Mathematical Intelligencer, 32, No. 1, 16-17 (2010)
Looking for a job teaching mathematics? Have we got a school for
you.
Immortality
Colin Adams, Thomas T. Read Professor of Mathematics
Mathematical Intelligencer, 32, No. 2, 19-20 (2010)
If you were immortal, how would you entertain yourself over the
centuries?
The Heart of Mathematics: An Invitation to Effective
Thinking
Edward B. Burger, Professor of Mathematics with Michael
Starbird
John Wiley & Sons, 3rd Edition (2010)
Coincidences, Chaos and All That Math Jazz
Edward B. Burger, Professor of Mathematics with Michael
Starbird
Seung San Publishers, Korea, Korean Language Edition, 352 p. (2009)
Business Statistics: A First Course
Richard De Veaux, Professor of Statistics with Norean Sharpe
and Paul Velleman
Pearson, Inc., Boston (2011)
Multivariate Additive Partial Least-Squares
Splines
Richard De Veaux, Professor of Statistics with R. Lombardo
and J.F. Durand
Journal of Chemometrics, 23, Issue 12, 605-617
(2009)
A Thermodynamic Classification of Real Numbers
Thomas Garrity, William R. Kenan Jr. Professor of
Mathematics
Journal of Number Theory, 130, No. 7, 1537-1559 (July 2010)
A new classification scheme for real numbers is given, motivated by ideas
from statistical mechanics in general, and the work of Knauf (1993) and Fiala and Kleban (2005) in particular. Critical for this classification of a real number will be the Diophantine properties of its continued fraction expansion.
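Since the classification turns on the partial quotients of the continued fraction expansion, here is a minimal sketch of computing them; the function name and term count are arbitrary choices.

```python
from fractions import Fraction

def continued_fraction(x, n_terms=12):
    """First n_terms partial quotients [a0; a1, a2, ...] of the continued
    fraction expansion of x >= 0, computed via the Gauss map."""
    terms = []
    for _ in range(n_terms):
        a = int(x)            # floor, since x >= 0
        terms.append(a)
        frac = x - a
        if frac == 0:
            break             # rational input: the expansion terminates
        x = 1 / frac
    return terms

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16], pi's approximant
print(continued_fraction((1 + 5 ** 0.5) / 2))   # golden ratio: all 1s,
                                                # up to floating-point error
```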
Cyclic Approximation to Stasis
Stewart Johnson, Professor of Mathematics with Jordan Rodu
‘05
Electronic Journal of Differential Equations, 80, 1-16 (2009)
Theoretical Considerations of Fisher’s Theorem in
Small At-Risk Population
Stewart Johnson, Professor of Mathematics
Bulletin of Mathematical Biology, 72, Issue 1, 221-229 (2010)
Baserunner’s Optimal Path
Stewart Johnson and Frank Morgan, Professors of Mathematics,
with Davide Carozza
Math Intelligencer, 32, 10-15 (2010)
We compute the fastest path around the bases, assuming a bound on the
magnitude of the acceleration.
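A back-of-envelope comparison conveys why swinging wide wins. The acceleration bound of 10 ft/s² and the circular route below are toy assumptions for illustration, not the paper's optimal path.

```python
import math

A_MAX = 10.0   # assumed acceleration bound, ft/s^2 (illustrative)
BASE = 90.0    # base-to-base distance, ft

# Running the straight baseline and stopping at each base: each leg is a
# bang-bang motion (accelerate halfway, brake halfway), from rest to rest.
leg = 2 * math.sqrt(BASE / A_MAX)
print("stop-at-each-base time:", round(4 * leg, 1), "s")   # 24.0 s

# A circle through all four bases (radius 45*sqrt(2) ft) can be run at the
# constant speed whose centripetal acceleration equals the bound; ignoring
# the initial speed-up, this gives a crude bound for a circular route.
r = 45 * math.sqrt(2)
v = math.sqrt(A_MAX * r)
print("circular-route bound:", round(2 * math.pi * r / v, 1), "s")   # ~15.9 s
```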
Simultaneous Confidence Bounds for Relative Risks in
Multiple Comparisons to Control
Bernhard Klingenberg, Associate Professor of Statistics
Statistics in Medicine (2010)
Motivated by a recent flu-vaccine study, this paper discusses the construction of simultaneous upper confidence limits that jointly bound the relative risks formed by comparing several treatments to a control.
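As a simplified illustration of the kind of bound involved, the sketch below uses a crude Bonferroni adjustment with the standard log-scale normal approximation, not the sharper method of the paper; all counts are invented.

```python
import math
from statistics import NormalDist

def simultaneous_upper_rr_limits(events, n, events_ctrl, n_ctrl, alpha=0.05):
    """Bonferroni-adjusted simultaneous one-sided upper confidence limits for
    the relative risks of k treatment arms versus a shared control arm."""
    k = len(events)
    z = NormalDist().inv_cdf(1 - alpha / k)   # Bonferroni adjustment
    p0 = events_ctrl / n_ctrl
    limits = []
    for x, m in zip(events, n):
        p1 = x / m
        rr = p1 / p0
        se = math.sqrt((1 - p1) / (p1 * m) + (1 - p0) / (p0 * n_ctrl))
        limits.append(rr * math.exp(z * se))   # upper limit on the log scale
    return limits

# three hypothetical treatment arms versus one control arm
print(simultaneous_upper_rr_limits([12, 8, 15], [200, 200, 200], 10, 200))
```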
Proof of Concept and Dose Elimination with Binary
Responses under Model Uncertainty
Bernhard Klingenberg, Associate Professor of Statistics
Statistics in Medicine, 28, 274-292 (2009)
This paper develops a unified framework for testing Proof of Concept in phase II or similar clinical trials and for estimating a target dose under model uncertainty.
A Class of Local Noetherian Domains
Susan Loepp, Professor of Mathematics with C. Rotthaus and
S. Sword
Journal of Commutative Algebra, 4, 647-678 (2009)
In this paper, we construct unique factorization domains with a given
specific completion. The ideas in the paper are based on previous work of Nishimura, Rotthaus, Ogoma, and Heitmann.
Dimensions of Formal Fibers of Height One Prime
Ideals
Susan Loepp, Professor of Mathematics with A. Boocher and M.
Daub
Communications in Algebra, 1, 233-253 (2010)
The relationship between a local ring and its completion has long been
known to be important and mysterious. One way of studying this relationship is
to examine the dimensions of the formal fiber rings. In this paper, we answer the following question: if A is an excellent local integral domain such that the dimension of the formal fiber ring at (0) is positive, then must the set of height-one prime ideals p such that the dimension of the formal fiber ring at (0) is the same as the dimension of the formal fiber ring at p be finite? Since excellent rings behave well, the intuition is that the answer to this question should be yes. We show, however, that there do exist excellent local integral domains such that the set described above is infinite, confirming that in many ways, excellent local rings do not behave as nicely as one might hope.
The Effect of Convolving Families of L-Functions on the
Underlying Group Symmetries
Steven J. Miller, Assistant Professor of Mathematics with
Eduardo Duenez
Proceedings of the London Mathematical Society (2009)
The Katz-Sarnak conjectures state that the behavior of zeros in families of
L-functions is well modeled by the eigenvalues of a classical compact group.
We show we may attach a symmetry constant to many families of L-functions that
identifies the corresponding classical compact group. Further, we identify a
nice group structure, namely the symmetry constant of the Rankin-Selberg
convolution of two families is the product of the symmetry constants. This
allows us to predict the associated classical compact group in many new cases,
which we then prove.
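In symbols, the multiplicative structure can be summarized as follows; the normalization of the symmetry constant c is our assumed paraphrase of the paper's convention:

\[
c_{\mathcal{F}} =
\begin{cases}
\;0 & \text{unitary symmetry},\\
\;1 & \text{symplectic symmetry},\\
-1 & \text{orthogonal symmetry},
\end{cases}
\qquad
c_{\mathcal{F}\times\mathcal{G}} = c_{\mathcal{F}}\, c_{\mathcal{G}}.
\]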
Data Diagnostics Using Second Order Tests of
Benford’s Law
Steven J. Miller, Assistant Professor of Mathematics with
Mark Nigrini
Auditing: A Journal of Practice and Theory, 28, 2, 305-324
(2009)
Auditors are required to use analytical procedures to identify the
existence of unusual transactions, events, and trends. Benford's Law gives the
expected patterns of the digits in numerical data, and has been advocated as a
test for the authenticity and reliability of transaction level accounting data.
This paper describes a new second-order test that calculates the digit
frequencies of the differences between the ordered (ranked) values in a data set.
These digit frequencies approximate the frequencies of Benford's Law for most
data sets. In the study the second-order test is applied to four sets of
transactional data. The second-order test detected errors in data downloads,
rounded data, data generated by statistical procedures, and the inaccurate
ordering of data. The test can be applied to any data set and nonconformity
usually signals some issue related to data integrity that might not have been
easily detectable using traditional analytical procedures.
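A minimal sketch of the test's core computation follows; it is illustrative only, and the paper's procedure includes diagnostics beyond this simple digit comparison.

```python
import math
import random
from collections import Counter

def second_order_benford(data):
    """Second-order test: first-digit frequencies of the differences between
    consecutive ordered (ranked) values, versus the Benford expectation
    log10(1 + 1/d) for digit d."""
    ordered = sorted(data)
    diffs = [b - a for a, b in zip(ordered, ordered[1:]) if b > a]
    digits = Counter(int(f"{d:.6e}"[0]) for d in diffs)   # leading digit
    n = sum(digits.values())
    for d in range(1, 10):
        observed = digits.get(d, 0) / n
        expected = math.log10(1 + 1 / d)
        print(d, round(observed, 3), round(expected, 3))

random.seed(1)
second_order_benford([random.lognormvariate(0, 2) for _ in range(10_000)])
```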
When Almost All Sets Are Difference Dominated
Steven J. Miller, Assistant Professor of Mathematics with
Peter Hegarty
Random Structures and Algorithms, 35, 1, 118-136 (2009)
We investigate the relationship between the sizes of the sum and difference
sets attached to a subset of {0,1,...,N}, chosen randomly according to a
binomial model with parameter p(N), with N^{-1} = o(p(N)). We show that the
random subset is almost surely difference dominated as N → ∞ for any choice of p(N) tending to zero, thus confirming a conjecture of Martin and O'Bryant. Furthermore, we exhibit a threshold phenomenon regarding the ratio of the size of the difference set to that of the sumset. We also extend our results to the
comparison of the generalized difference sets attached to an arbitrary pair of
binary linear forms. The heart of our approach involves using different tools to
obtain strong concentration of the sizes of the sum and difference sets about
their mean values, for various ranges of the parameter p.
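The phenomenon is easy to observe numerically; a Monte Carlo sketch with arbitrary parameters:

```python
import random

def fraction_difference_dominated(N, p, trials=200):
    """Empirically compare |A+A| and |A-A| for random subsets of {0,...,N},
    each element included independently with probability p."""
    count = 0
    for _ in range(trials):
        A = [x for x in range(N + 1) if random.random() < p]
        sums = {a + b for a in A for b in A}
        diffs = {a - b for a in A for b in A}
        if len(diffs) > len(sums):
            count += 1
    return count / trials

# with p(N) tending to zero (e.g. p = N**-0.5), difference domination is typical
print(fraction_difference_dominated(N=1000, p=1000 ** -0.5))
```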
Nuclei, Primes and the Random Matrix Connection
Steven J. Miller, Assistant Professor of Mathematics with
Frank W.K. Firk
Symmetry, 1, 64-105 (2009)
In this article, we discuss the remarkable connection between two very
different fields, number theory and nuclear physics. We describe the essential
aspects of these fields, the quantities studied, and how insights in one have
been fruitfully applied in the other. The exciting branch of modern mathematics
– random matrix theory – provides the connection between the two
fields. We assume no detailed knowledge of number theory, nuclear physics, or
random matrix theory; all that is required is some familiarity with linear
algebra and probability theory, as well as some results from complex analysis.
Our goal is to provide the inquisitive reader with a sound overview of the
subjects, placing them in their historical context in a way that is not
traditionally given in the popular and technical surveys.
Silver Scheduler: A Demand-Driven Modeling Approach for
the Construction
of Micro-Schedules of Movies in a
Multiplex
Steven J. Miller, Assistant Professor of Mathematics with
Jehoshua Eliashberg,
B. Weinberg and Berend Wierenga
International J. of Research in Marketing (2009)
This paper describes a model that generates weekly movie schedules in a
multiplex movie theater. A movie schedule specifies within each day of the week,
on which screen(s) different movies will be played, and at which time(s). The
model consists of two parts: (i) conditional forecasts of the number of visitors
per show for any possible starting time; and (ii) an optimization procedure that
quickly finds an almost optimal schedule (which can be demonstrated to be close
to the optimal schedule). To generate this schedule we formulate the so-called
movie scheduling problem as a generalized set partitioning problem. The latter
is solved with an algorithm based on column generation techniques. We have
applied this combined demand forecasting/schedule optimization procedure to a
multiplex in Amsterdam where we supported the scheduling of fourteen movie
weeks. The proposed model not only makes movie scheduling easier and less time
consuming, but also generates schedules that would attract more visitors than
the current ‘intuition-based’ schedules.
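A toy version of the set-partitioning view conveys the formulation: each column is a candidate full-day schedule for one screen with a forecast attendance, and exactly one column is chosen per screen. Movie names, times, and forecasts below are invented, and the paper solves far larger instances by column generation rather than enumeration.

```python
from itertools import product

candidates = {
    "screen1": [("MovieA 14:00/18:00/21:00", 620), ("MovieB 15:00/20:00", 540)],
    "screen2": [("MovieB 14:30/19:30", 480), ("MovieC 13:00/17:00/21:00", 510)],
}

# brute-force set partitioning: pick one candidate schedule (column) per
# screen (row), maximizing total forecast attendance
best = max(
    product(*candidates.values()),
    key=lambda cols: sum(att for _, att in cols),
)
for (schedule, att), screen in zip(best, candidates):
    print(screen, "->", schedule, f"(forecast {att})")
```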
Explicit Constructions of Infinite Families of MSTD
Sets
Steven J. Miller, Assistant Professor of Mathematics with
Brooke Orosz and Dan Scheinerman
Journal of Number Theory, 130, 1221-1233 (2010)
We explicitly construct infinite families of MSTD (more sums than
differences) sets. There are enough of these sets to prove that there exists a
constant C such that at least C/r^4 of the 2^r subsets of {1, ..., r} are MSTD sets; thus our family is significantly denser than previous constructions (whose densities are at most f(r)/2^{r/2} for some polynomial f(r)). We conclude by generalizing our method to compare linear forms.
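The defining property is easy to check directly on the classic example; the set below is the well-known 8-element MSTD set, not one of the paper's new families.

```python
A = {0, 2, 3, 4, 7, 11, 12, 14}          # classic "Conway" MSTD set
sums = {a + b for a in A for b in A}
diffs = {a - b for a in A for b in A}
print(len(sums), len(diffs))             # 26 25: more sums than differences
```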
An Orthogonal Test of the L-Functions Ratios
Conjecture
Steven J. Miller, Assistant Professor of Mathematics
Proceedings of the London Mathematical Society (2009)
We test the predictions of (a weakened version of) the L-functions
Ratios Conjecture for the family of cuspidal newforms of weight k and
level N, with either k fixed and N tending to infinity
through the primes or N = 1 and k tending to infinity. We
study the main and lower order terms in the 1-level density. We provide evidence
for the Ratios Conjecture by computing and confirming its predictions up to a
power savings in the family’s cardinality, at least for test functions
whose Fourier transforms are supported in (-2, 2). We do this both
for the weighted and unweighted 1-level density (where in the weighted case we
use the Petersson weights), thus showing that either formulation may be used.
These two 1-level densities differ by a term of size 1/log(k^2 N). Finally, we show that there is another way of extending
the sums arising in the Ratios Conjecture, leading to a different answer
(although the answer is such a lower order term that it is hopeless to observe
which is correct).
Stochastic Calculus and the Nobel Prize Winning
Black-Scholes Equation
Frank Morgan, Webster Atwell Class of 1921 Professor of
Mathematics
Math Horizons, 16-18 (November 2009)
The celebrated Black-Scholes partial differential equation for financial
derivatives stands as a revolutionary application of stochastic or random
calculus. Based on a short talk at a special “Stochastic Fantastic
Day,” which my chair Tom Garrity organized to give his colleagues a chance
to explore a compelling but unfamiliar topic and enjoy dinner at his home
afterwards.
Fermat’s Last Theorem for Fractional and Irrational
Components
Frank Morgan, Webster Atwell Class of 1921 Professor of
Mathematics
College Math. J., 41, 182-185 (2010)
Fermat’s Last Theorem says that for positive integers n > 2, x, y,
z, there are no solutions to x^n + y^n = z^n.
What about rational exponents n? Irrational n? Negative n? See what an
undergraduate senior seminar discovered.
Baserunner’s Optimal Path
Frank Morgan, Webster Atwell Class of 1921 Professor of
Mathematics
Collegiate Baseball Newspaper (2010)
An expository account of our paper about the fastest path around the bases,
assuming a bound on the magnitude of the acceleration.
Function Fields with 3-Rank at Least 2
Allison Pacelli, Associate Professor of Mathematics
Acta Arith., 139, 101-110 (2009)
In a 2007 Acta Arithmetica paper, a construction was given for an infinite
parameterized family of quadratic fields with 3-rank at least 2. This paper
gives a function field analogue.
Indivisibility of Class Numbers of Global Function
Fields
Allison Pacelli, Associate Professor of Mathematics with
Michael Rosen
Acta Arith., 138, 269-287 (2009)
Much is known about the divisibility of class numbers of both number fields
and function fields, but the question of indivisibility of class numbers has
proven far more intractable. Almost all previous results are for quadratic
fields only, and the fields desired are not constructed explicitly. In this
paper, we construct infinitely many function fields of degree m (3 not dividing
m) over the rational function field with class number indivisible by 3.
On Ergodic Transformations That Are Both Weakly Mixing
and Uniformly Rigid
Cesar Silva, Hagey Family Professor of Mathematics with
Jennifer James ‘07,
Thomas Koberda, Kathryn Lindsey ‘07, and
Peter Speh
New York Journal of Mathematics, 15, 393-403 (2009)
We examine some of the properties of uniformly rigid transformations, and
analyze the compatibility of uniform rigidity and (measurable) weak mixing along
with some of their asymptotic convergence properties. We show that on Cantor
space, there does not exist a finite measure-preserving, totally ergodic,
uniformly rigid transformation. We briefly discuss general group actions and
show that (measurable) weak mixing and uniform rigidity can coexist in a more
general setting.
Ergodic Properties of a Class of Discrete Abelian Group
Extensions
of Rank-One Transformations
Cesar Silva, Hagey Family Professor of Mathematics with C.
Dodd, P. Jeasakul ‘05,
P. Jirapattanakul ‘05, D. Kane, B.
Robinson, and N. Stein
Colloquium Mathematicum, 119, No. 4, 1-22 (2010)
We define a class of discrete Abelian group extensions of rank-one
transformations and establish necessary and sufficient conditions for these
extensions to be power weakly mixing. We show that all members of this class
are multiply recurrent. We then study conditions sufficient for showing that
Cartesian products of transformations are conservative for a class of invertible
infinite measure-preserving transformations and provide examples of these
transformations.
PHYSICS
A Two Length Scale Polymer Theory for RNA Loop Free
Energies and Helix Stacking
Daniel P. Aalberts and Nagarajan Nandagopal ’09
RNA, in press (doi: 10.1261/rna.1831710) (2010)
The reliability of RNA secondary structure predictions is subject to the
accuracy of the underlying free energy model. Mfold and other RNA folding
algorithms are based on the Turner model, whose weakest part is its formulation
of loop free energies, particularly for multibranch loops. RNA loops contain
single-strand and helix-crossing segments, so we develop an enhanced two-length
freely jointed chain theory and revise it for self-avoidance. Our resulting
universal formula for RNA loop entropy has fewer parameters than the
Turner/Mfold model, and yet simulations show that the standard errors for
multibranch loop free energies are reduced by an order of magnitude. We further
note that coaxial stacking decreases the effective length of multibranch loops
and provides, surprisingly, an entropic stabilization of the ordered
configuration in addition to the enthalpic contribution of helix stacking. Our
formula is in good agreement with measured hairpin free energies. We find that
it also improves the accuracy of folding predictions.
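For orientation, polymer loop-entropy models of this kind generalize the classic Jacobson-Stockmayer form, in which the loop free energy grows logarithmically with loop size; schematically (a generic statement, not the authors' two-length formula):

\[
\Delta G_{\mathrm{loop}}(n) \;\approx\; \Delta G_{\mathrm{loop}}(n_0) \;+\; c\, k_B T \,\ln\!\frac{n}{n_0},
\]

where n counts loop segments and c is of order 3/2 for an ideal chain. The two-length theory refines the effective length entering such an expression, and coaxial stacking shortens it further, which is why stacking can stabilize a multibranch loop entropically.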
A Vision for Ultrafast Photoisomerization
Daniel P. Aalberts and Hans F. Stabenau ’02
Physica A, in press (doi:10.1016/j.physa.2010.02.016) (2010)
We propose a simple physical mechanism to explain the ultrafast first step
of vision, a photoinduced cis to trans rotation of retinal. In
the ground state, the torsional stability of π bonds is countered by
Coulomb interactions acting between the π lobes; the torsional dependence
for Coulomb interactions is absent in the often-used Ohno approximation, but
restored with our formula. After photoexcitation, the bonding weakens causing
the destabilizing terms to dominate. The twist in the ground state due to
steric interactions surrounding the 11-cis bond increases the initial
torque and thus the speed of the reaction.
Low-Noise Amplification of a Continuous-Variable Quantum
State
Prof. Kevin Jones and others
Physical Review Letters, 103, 010501 (2009)
We present an experimental realization of a low-noise, phase-insensitive
optical amplifier using a four-wave mixing interaction in hot Rb vapor.
Performance near the quantum limit for a range of amplifier gains, including
near unity, can be achieved. Such low-noise amplifiers are essential for
so-called quantum cloning machines and are useful in quantum information
protocols. We demonstrate that amplification and
“cloning” of one half of a two-mode squeezed state is
possible while preserving entanglement.
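A note on "near the quantum limit": for any phase-insensitive linear amplifier of gain $G$, the Caves bound requires added noise; in quadrature-variance units where the vacuum variance is $\tfrac14$,
$$\Delta^2 X_{\mathrm{out}} \;\ge\; G\,\Delta^2 X_{\mathrm{in}} + \frac{G-1}{4},$$
so even an ideal amplifier degrades the signal-to-noise ratio of a coherent input by 3 dB at high gain. (This is the standard benchmark, summarized here in our conventions, not the paper's analysis.)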
Quantum Correlated Light Beams from Non-Degenerate
Four-Wave Mixing in an Atomic Vapor:
The D1 and D2 Lines of
85Rb and 87Rb
Prof. Kevin Jones and others
Optics Express, 17, 16722 (2009)
We present experimental results showing that quantum correlated light can
be produced using non-degenerate, off-resonant, four-wave mixing (4WM) on both
the D1 (795 nm) and D2 (780 nm) lines of 85Rb and 87Rb, extending earlier work
on the D1 line of 85Rb. Using this 4WM process in a hot vapor cell to produce
bright twin beams, we characterize the degree of intensity-difference noise
reduction below the standard quantum limit for each of the four systems.
Although each system approximates a double-lambda configuration, differences in
details of the actual level structure lead to varying degrees of noise
reduction. The observation of quantum correlations on light produced using all
four of these systems, regardless of their substructure, suggests that it should
be possible to use other systems with similar level structures in order to
produce narrow frequency, non-classical beams at a particular wavelength.
Low-Noise Amplification of a Continuous-Variable Quantum
State
Prof. Kevin Jones and others
In Quantum Information and Computation VIII, (SPIE Conference
Proceedings 77020) (April 2010)
We present an experimental realization of a low-noise, phase-insensitive
optical amplifier using a four-wave mixing interaction in hot Rb vapor.
Performance near the quantum limit for a range of amplifier gains, including
near unity, can be achieved. Such low-noise amplifiers are essential for
so-called quantum cloning machines and are useful in quantum information
networks and protocols. We demonstrate that amplification and
“cloning” of one half of a two-mode squeezed state is possible while
preserving entanglement. The inseparability criterion between the two original
modes remains satisfied for small to large gains, while the EPR criterion is
satisfied for a smaller range. This amplification of quantum correlations paves
the way for optimal cloning of a bipartite entangled state.
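For reference, the objects behind these criteria, in standard conventions ($[\hat x_j,\hat p_j]=i$; our notation): the two-mode squeezed state is
$$|\mathrm{TMSS}\rangle = \sqrt{1-\lambda^2}\,\sum_{n=0}^{\infty}\lambda^n\,|n\rangle_1|n\rangle_2,\qquad \lambda=\tanh r,$$
the Duan inseparability criterion is violated by entangled states,
$$\Delta^2(\hat x_1-\hat x_2)+\Delta^2(\hat p_1+\hat p_2)\;\ge\;2 \quad (\text{all separable states}),$$
with the ideal two-mode squeezed state reaching $2e^{-2r}$, and the stricter EPR (Reid) criterion asks that the product of conditional variances satisfy $\Delta^2(\hat x_1|\hat x_2)\,\Delta^2(\hat p_1|\hat p_2)<\tfrac14$. This ordering matches the abstract: inseparability survives amplification over a wider range of gains than the EPR condition does.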
Measurement of Hyperfine Structure within the
6P3/2 Excited State of 115In
Mevan Gunawardena, Huajie Cao ’09, Paul Hess
’08, and P.K. Majumder
Phys. Rev A 80, 032519 (2009)
Using a two-step, two-color laser spectroscopy technique, we have completed
a measurement of the hyperfine structure within the 5s2 6p
2P3/2 excited state of 115In (I = 9/2). A frequency-stabilized
GaN diode laser at 410 nm is locked to the
5P1/2→6S1/2 ground-state transition and a
second 1291 nm diode laser is scanned over the
6S1/2→6P3/2 transition to produce
hyperfine spectra for the 6P3/2 (F = 3, 4, 5, 6) manifold. We find
the hyperfine splittings between consecutive sublevels to be as follows:
4−3=275.2542 MHz, 5−4=384.0571 MHz, and 6−5=517.4844 MHz. The
magnetic dipole, electric quadrupole, and magnetic octupole hyperfine coupling
constants derived from these three splittings are, respectively, a=79.337 MHz,
b=62.55 MHz, and c=−0.044 MHz. The measured value of the dipole constant,
a, agrees to within 2% with a recent theoretical prediction.
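The extraction of the coupling constants follows the standard Casimir expansion of the hyperfine energy (sketched here in textbook form; the octupole term is abbreviated):
$$E_{\mathrm{hfs}}(F) = \frac{a}{2}K + b\,\frac{\tfrac32 K(K+1) - 2I(I+1)J(J+1)}{4I(2I-1)J(2J-1)} + c\,(\text{octupole term}),$$
$$K = F(F+1)-I(I+1)-J(J+1),$$
with $I=9/2$ and $J=3/2$; the three measured intervals $E(F)-E(F-1)$ for $F=4,5,6$ are exactly enough to determine $a$, $b$, and $c$.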
Quantum Information: Circuits that Process with
Magic
R. W. Simmonds and F. W. Strauch
Nature 460, 187 (2009)
Practical quantum computation will require a scalable, robust system to
generate and process information with precise control. This is now possible
using a superconducting circuit and a little quantum magic.
Multifrequency Control Pulses for Multilevel
Superconducting Quantum Circuits
A. M. Forney, S. R. Jackson, and F. W. Strauch
Physical Review A 81, 012306 (2010)
Superconducting quantum circuits, such as the superconducting phase qubit,
have multiple quantum states that can interfere with ideal qubit operation. The
use of multiple frequency control pulses, resonant with the energy differences
of the multi-state system, is theoretically explored. An analytical method to
design such control pulses is developed, using a generalization of the Floquet
method to multiple frequency controls. This method is applicable to optimizing
the control of both superconducting qubits and qudits, and is found to be in
excellent agreement with time-dependent numerical simulations.
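Schematically, the setting is a driven multilevel Hamiltonian (our generic notation, not the paper's):
$$H(t)=\sum_n E_n\,|n\rangle\langle n| + \hat V \sum_k A_k \cos(\omega_k t+\phi_k),\qquad \hbar\omega_k \approx E_{k+1}-E_k,$$
and the multi-frequency Floquet analysis fixes the amplitudes $A_k$ and phases $\phi_k$ so that population remains in the intended computational subspace rather than leaking to neighboring levels.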
Arbitrary Control of Entanglement between Two
Superconducting Resonators
F. W. Strauch, K. Jacobs, and R. W. Simmonds
Physical Review Letters (in press, 2010)
We present a method to synthesize an arbitrary quantum state of two
superconducting resonators. This state-synthesis algorithm utilizes a coherent
interaction of each resonator with a tunable artificial atom to create entangled
quantum superpositions of photon number (Fock) states in the resonators. We
theoretically analyze this approach, showing that it can efficiently synthesize
NOON states, with large photon numbers, using existing technology.
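For reference, a NOON state of two resonators $a$ and $b$ is the superposition
$$|\psi_{\mathrm{NOON}}\rangle = \frac{1}{\sqrt 2}\left(|N\rangle_a|0\rangle_b + |0\rangle_a|N\rangle_b\right),$$
prized in metrology because its phase sensitivity scales as $1/N$ (the Heisenberg limit) rather than the $1/\sqrt N$ of uncorrelated photons.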
Entanglement and Composite Bosons
Christopher Chudzicki ’10, Olufolajimi Oke ’10,
and William K. Wootters
Physical Review Letters 104, 070402 (2010)
We build upon work by C. K. Law [Phys. Rev. A 71, 034306 (2005)] to show in
general that the entanglement between two fermions largely determines the extent
to which the pair behaves like an elementary boson. Specifically, we derive
upper and lower bounds on a quantity χ_{N+1}/χ_N
that governs the bosonic character of a pair of fermions when N such pairs
approximately share the same wavefunction. Our bounds depend on the purity of
the single-particle density matrix, an indicator of entanglement, and
demonstrate that if the entanglement is sufficiently strong, the quantity
χ_{N+1}/χ_N approaches its ideal bosonic
value.
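As we read the abstract, the bounds take the form (with $P$ the purity of the single-particle density matrix; see the paper for the precise statement)
$$1-NP \;\lesssim\; \frac{\chi_{N+1}}{\chi_N} \;\le\; 1-P,$$
so the ratio approaches the ideal bosonic value $1$ precisely when $P\ll 1/N$, i.e., when the constituent fermions are sufficiently entangled.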
PSYCHOLOGY
The Development of Defense Mechanisms from
Pre-Adolescence to Early Adulthood:
Do IQ and Social Class Matter? A
Longitudinal Study
Phebe Cramer, Professor of Psychology
Journal of Research in Personality, 43, 464-471 (2009)
The defense use of participants in the Berkeley Guidance Study,
Intergenerational Studies, University of California, Berkeley, was traced
longitudinally from pre-adolescence (n = 130) to early adulthood (n = 120). As
coded from their TAT stories using the Defense Mechanism Manual (DMM: Cramer,
1991a), the results showed change in defense use at adulthood. Consistent with
previous findings, the defense of Projection was used more frequently than
Denial at both ages. However, in adulthood there was a decline in the salience
of Identification and an increase in the salience of Denial. This change in
defense use between pre-adolescence and early adulthood was predicted by both
childhood IQ and social class.
An Increase in Early Adolescent Undercontrol is
Associated with the Use of Denial
Phebe Cramer, Professor of Psychology
Journal of Personality Assessment, 91, 331-339 (2009)
A longitudinal study of change in undercontrol, and its relation to the use
of defense mechanisms, was studied with participants from the Berkeley Guidance
Study of the Institute of Human Development, University of California, Berkeley.
It was predicted that use of the immature defense of Denial, but not Projection
or Identification, would be related at early adolescence to an increase in
undercontrol as assessed from two independent measures. The assessment of Ego
Undercontrol indicated that the majority of children decreased in undercontrol with age, but for
those who increased at early adolescence, the increase was significantly related
to the use of Denial. Similarly, assessment of Externalizing Behavior Problems
at early adolescence indicated that an increase in Externalizing Problems was
related to the use of Denial. It is suggested that in addition to indicating
psychological immaturity, the use of Denial prevents these children from
recognizing the negative impact of their undercontrolled behavior.
Defense Mechanisms and Self-Doubt
Phebe Cramer, Professor of Psychology
In R. M. Arkin, K.C. Oleson & P.J. Carroll (Eds.) Handbook of the
Uncertain Self, New York : Psychology Press (2009)
This chapter demonstrates how the experience of self-doubt may result in an
increased use of defense mechanisms. In contrast to other more conscious
strategies for protecting the self, defense mechanisms operate outside of
awareness. In fact, awareness of their functioning may render them ineffective.
In this essay, we focus on three defenses – denial, projection, and
identification – that represent a hierarchy of defense development.
Whether demonstrated by the observed relation between self-doubt and defense
use, as in correlational studies, or shown by experimental studies in which a
controlled intervention was used to create self-doubt, there is convincing
evidence that the use of defense mechanisms is yet one more way in which the
self is protected.
Taking from Those That Have More and Giving to Those That
Have Less:
How Inequity Frames Affect Corrections for
Inequity
Brian S. Lowery, Rosalind M. Chow, Jennifer Randall Crosby,
Assistant Professor of Psychology
Journal of Experimental Social Psychology, 45, 375-378 (2009)
Most theories of inequity focus on relative inequity. In contrast, this
paper provides evidence that individuals infer what people should have (i.e. an
absolute standard) from the way inequity is described. In the reported
experiment, participants give more to a subordinate actor when inequity is
described in terms of “less than” rather than “more
than,” and take more from a dominant actor when inequity is described in
terms of “more than” rather than “less than,” even
though the magnitude of inequity is constant. Mediational analyses suggest that
these differences are driven by changes in individuals’ perceptions of
what the actors should have (i.e. the standard). We conclude by discussing the
implications for motivated perceptions of inequity and redistributive policy
attitudes.
Anger and Approach Motivation in Infancy: Relations to
Early Childhood Inhibitory Control
and Behavior Problems
Jie He, Kathryn A. Degnan, Jennifer M. McDermott, Heather A.
Henderson, Amie Ashley Hane, Assistant Professor of Psychology, Qinmei Xu, &
Nathan A. Fox
Infancy, 15, 246-269 (2010)
The relations among infant anger reactivity, approach behavior, and frontal
electroencephalogram (EEG) asymmetry, and their relations to inhibitory control
and behavior problems in early childhood were examined within the context of a
longitudinal study of temperament. Two hundred nine infants' anger expressions
to arm restraint were observed at 4 months of age. Infants' approach behaviors
during play with an unpredictable toy and baseline frontal EEG asymmetry were
assessed at 9 months of age. Inhibitory control during a Go/No-Go task and
parent report of behavior problems were evaluated at 4 years of age. High
anger-prone infants with left, but not right, frontal EEG asymmetry showed
significantly more approach behaviors and less inhibitory control relative to
less anger-prone infants. Although a link between anger proneness in infancy and
behavior problems in early childhood was not found, a combination of low
approach behaviors and poor inhibitory control was predictive of internalizing
behaviors.
One-Year Temporal Stability of Delay-Discount
Rates
K. N. Kirby, Professor of Psychology
Psychonomic Bulletin & Review, 16, 457-462 (2009)
The temporal stability of delay-discount rates for monetary rewards was
assessed using a monetary choice questionnaire (Kirby & Marakovic, 1996). Of
100 undergraduate participants who completed the questionnaire at the initial
session, 81 returned 5 weeks later and 46 returned 57 weeks later for subsequent
sessions. The 5-week test–retest stability of discount rates was .77 (95%
confidence interval = .67–.85), the 1-year stability was .71
(.50–.84), and the 57-week stability was .63 (.41–.77). Thus, at
least when similar testing situations are reinstated, discount rates as
individual differences have 1-year stabilities in the range that is typically
obtained for personality traits. Discount rates index an attribute of the person
that is relatively stable over time but that is moderated by aspects of the
situation, such as reward type and deprivational state.
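The questionnaire's logic is easy to make concrete. Under the hyperbolic model used in this line of work, a delayed amount A at delay D has present value V = A/(1 + kD); each choice between a smaller-immediate and a larger-delayed reward brackets the respondent's k. A minimal sketch in Python (illustrative amounts, not items from the questionnaire):

def present_value(amount, delay_days, k):
    # Hyperbolic discounting: V = A / (1 + k * D)
    return amount / (1.0 + k * delay_days)

def indifference_k(immediate, delayed, delay_days):
    # Solve immediate = delayed / (1 + k * D) for k.
    return (delayed / immediate - 1.0) / delay_days

# Hypothetical item: $55 now versus $75 in 61 days.
k_star = indifference_k(55.0, 75.0, 61)
print(round(k_star, 4))  # choosing the delayed reward implies k < k_star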
The Hierarchical Structure of Self-Reported
Impulsivity
K. N. Kirby, Professor of Psychology, & J. Finch ’97
Personality and Individual Differences, 48, 704-713
(2010)
The hierarchical structure of 95 self-reported impulsivity items, along
with delay–discount rates for money, was examined. A large sample of
college students participated in the study (N = 407). Items represented
every previously proposed dimension of self-reported impulsivity. Exploratory
PCA yielded at least seven interpretable components: Prepared/Careful,
Impetuous, Divertible, Thrill and Risk Seeking, Happy-Go-Lucky, Impatiently
Pleasure Seeking, and Reserved. Discount rates loaded on Impatiently Plea- sure
Seeking, and correlated with the impulsiveness and venturesomeness scales from
the I7 (Eysenck, Pearson, Easting, & Allsopp,
1985). The hierarchical emergence of the components was explored, and we
show how this hierarchical structure may help organize conflicting dimensions
found in previous analyses. Finally, we argue that the discounting model
(Ainslie, 1975) provides a qualitative framework for
understanding the dimensions of impulsivity.
A Really Hard Test Really Helps Learning
Nate Kornell, Assistant Professor of Psychology & Sam
Kornell
(2010)
Challenging tests, and falling short on them, may be hard on the ego, but they
can do more than mere studying to help one eventually get it right.
Metacognition in Humans and Animals
Nate Kornell, Assistant Professor of Psychology
Current Directions in Psychological Science, 18, 11-15 (2009)
It has long been assumed that metacognition—thinking about one's own
thoughts—is a uniquely human ability. Yet a decade of research suggests
that, like humans, other animals can differentiate between what they know and
what they do not know. They opt out of difficult trials; they avoid tests they
are unlikely to answer correctly; and they make riskier "bets" when their
memories are accurate than they do when their memories are inaccurate. These
feats are simultaneously impressive and, by human standards, somewhat limited;
new evidence suggests, however, that animals can generalize metacognitive
judgments to new contexts and seek more information when they are unsure.
Metacognition is intriguing, in part, because of parallels with self-reflection
and conscious awareness. Consciousness appears to be consistent with, but not
required by, the abilities animals have demonstrated thus far.
Optimizing Learning Using Flashcards: Spacing is More
Effective Than Cramming
Nate Kornell, Assistant Professor of Psychology
Applied Cognitive Psychology, 23, 1297-1317 (2009)
The spacing effect - that is, the benefit of spacing learning events apart
rather than massing them together - has been demonstrated in hundreds of
experiments, but is not well known to educators or learners. I investigated the
spacing effect in the realistic context of flashcard use. Learners often divide
flashcards into relatively small stacks, but compared to a large stack, small
stacks decrease the spacing between study trials. In three experiments,
participants used a web-based study programme to learn GRE-type word pairs.
Studying one large stack of flashcards (i.e. spacing) was more effective than
studying four smaller stacks of flashcards separately (i.e. massing). Spacing
was also more effective than cramming - that is, massing study on the last day
before the test. Across experiments, spacing was more effective than massing for
90% of the participants, yet after the first study session, 72% of the
participants believed that massing had been more effective than spacing.
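The stack-size manipulation reduces to simple arithmetic: cycling one stack of N cards in order puts N − 1 trials between repetitions of a card, while splitting the same cards into four stacks of N/4 cuts that lag to N/4 − 1. A toy illustration in Python (not the web-based programme used in the experiments):

# Spacing between repetitions of a card when a stack is cycled in order.
def lag_between_repetitions(stack_size):
    return stack_size - 1

total = 20
print(lag_between_repetitions(total))       # one stack of 20 -> 19 intervening trials (spaced)
print(lag_between_repetitions(total // 4))  # four stacks of 5 -> 4 intervening trials (massed)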
A Stability Bias in Human Memory: Overestimating
Remembering
and Underestimating Learning
Nate Kornell, Assistant Professor of Psychology & R. A.
Bjork
Journal of Experimental Psychology: General, 138, 449-468
(2009)
The dynamics of human memory are complex and often unintuitive, but certain
features—such as the fact that studying results in learning—seem
like common knowledge. In 12 experiments, however, participants who were told
they would be allowed to study a list of word pairs between 1 and 4 times and
then take a cued-recall test predicted little or no learning across trials,
notwithstanding their large increases in actual learning. When queried directly,
the participants espoused the belief that studying results in learning, but they
showed little evidence of that belief in the actual task. These findings, when
combined with A. Koriat, R. A. Bjork, L. Sheffer, and S. K. Bar’s (2004)
research on judgments of forgetting, suggest a stability bias in human
memory—that is, a tendency to assume that the accessibility of one’s
memories will remain relatively stable over time rather than benefiting from
future learning or suffering from future forgetting.
Unsuccessful Retrieval Attempts Enhance Subsequent
Learning
Nate Kornell, Assistant Professor of Psychology, M. J. Hays,
& R. A. Bjork
Journal of Experimental Psychology: Learning, Memory, & Cognition,
35, 989-998 (2009)
Taking tests enhances learning. But what happens when one cannot answer a
test question—does an unsuccessful retrieval attempt impede future
learning or enhance it? The authors examined this question using materials that
ensured that retrieval attempts would be unsuccessful. In Experiments 1 and 2,
participants were asked fictional general-knowledge questions (e.g., “What
peace treaty ended the Calumet War?”). In Experiments 3–6,
participants were shown a cue word (e.g., whale) and were asked to guess a weak
associate (e.g., mammal); the rare trials on which participants guessed the
correct response were excluded from the analyses. In the test condition,
participants attempted to answer the question before being shown the answer; in
the read-only condition, the question and answer were presented together.
Unsuccessful retrieval attempts enhanced learning with both types of materials.
These results demonstrate that retrieval attempts enhance future learning; they
also suggest that taking challenging tests—instead of avoiding
errors—may be one key to effective learning.
What Monkeys Can Tell Us about Metacognition and
Mindreading (Commentary)
Nate Kornell, Assistant Professor of Psychology, B. L.
Schwartz, & L. K. Son
Behavioral and Brain Sciences, 32, 150-151 (2009)
Thinkers in related fields such as philosophy, psychology, and education
define metacognition in a variety of different ways. Based on an emerging
standard definition in psychology, we present evidence for metacognition in
animals, and argue that mindreading and metacognition are largely
orthogonal.
Learners’ Choices and Beliefs about
Self-Testing
Nate Kornell, Assistant Professor of Psychology & L. K.
Son
Memory, 17, 493-501 (2009)
Students have to make scores of practical decisions when they study. We
investigated the effectiveness of, and beliefs underlying, one such practical
decision: the decision to test oneself while studying. Using a flashcards-like
procedure, participants studied lists of word pairs. On the second of two study
trials, participants either saw the entire pair again (pair mode) or saw the cue
and attempted to generate the target (test mode). Participants were asked either
to rate the effectiveness of each study mode (Experiment 1) or to choose between
the two modes (Experiment 2). The results demonstrated a mismatch between
metacognitive beliefs and study choices: Participants (incorrectly) judged that
the pair mode resulted in the most learning, but chose the test mode most
frequently. A post-experimental questionnaire suggested that self-testing was
motivated by a desire to diagnose learning rather than a desire to improve
learning.
Delayed versus Immediate Feedback in Children’s and
Adults’ Vocabulary Learning
J. Metcalfe, N. Kornell, Assistant Professor of Psychology,
& B. Finn
Memory & Cognition, 37, 1077-1087 (2009)
We investigated whether the superior memory performance sometimes seen with
delayed rather than immediate feedback was attributable to the shorter retention
interval (or lag to test) from the last presentation of the correct information
in the delayed condition. Whether lag to test was controlled or not, delayed
feedback produced better final test performance than did immediate feedback,
which in turn produced better performance than did no feedback at all, when we
tested Grade 6 children learning school-relevant vocabulary. With college
students learning GRE-level words, however, delayed feedback produced better
performance than did immediate feedback (and both were better than no feedback)
when lag to test was uncontrolled, but there was no difference between the
delayed and immediate feedback conditions when the lag to test was
controlled.
The Pretesting Effect: Do Unsuccessful Retrieval Attempts
Enhance Learning?
L. E. Richland, N. Kornell, Assistant Professor of
Psychology, & L. S. Kao
Journal of Experimental Psychology: Applied, 15, 243-257
(2009)
Testing previously studied information enhances long-term memory,
particularly when the information is successfully retrieved from memory. The
authors examined the effect of unsuccessful retrieval attempts on learning.
Participants in 5 experiments read an essay about vision. In the test
condition, they were asked about embedded concepts before reading the passage;
in the extended study condition, they were given a longer time to read the
passage. To distinguish the effects of testing from attention direction, the
authors emphasized the tested concepts in both conditions, using italics or
bolded keywords or, in Experiment 5, by presenting the questions but not asking
participants to answer them before reading the passage. Posttest performance was
better in the test condition than in the extended study condition in all
experiments—a pretesting effect—even though only items that were not
successfully retrieved on the pretest were analyzed. The testing effect appears
to be attributable, in part, to the role unsuccessful tests play in enhancing
future learning.
Simultaneous Decisions at Study: Time Allocation,
Ordering, and Spacing
L. K. Son & Nate Kornell, Assistant Professor of
Psychology
Metacognition and Learning, 4, 237-248 (2009)
Learners of all ages face complex decisions about how to study effectively.
Here we investigated three such decisions made in concert—time allocation,
ordering, and spacing. First, college students were presented with, and made
judgments of learning about, 16 word-synonym pairs. Then, when presented with
all 16 pairs, they created their own study schedule by choosing when and how
long to study each item. The results indicated that (a) the most study time was
allocated to difficult items, (b) relatively easy items tended to be studied
first, and (c) participants spaced their study at a rate significantly greater
than chance. The spacing data, which are of particular interest, differ from
previous findings that have suggested that people, including adults, believe
massing is more effective than spacing.
Police-Induced
Confessions: Risk Factors and Recommendations
Saul M. Kassin, Professor of Psychology, S. A. Drizin, T.
Grisso, G. H. Gudjonsson, R. A. Leo, & A. D. Redlich
Law and Human Behavior, 34, 3-38 (2010)
Recent DNA exonerations have shed light on the problem that people
sometimes confess to crimes they did not commit. Drawing on police practices,
laws concerning the admissibility of confession evidence, core principles of
psychology, and forensic studies involving multiple methodologies, this White
Paper summarizes what is known about police-induced confessions. In this review,
we identify suspect characteristics (e.g., adolescence, intellectual disability,
mental illness, and certain personality traits), interrogation tactics (e.g.,
excessive interrogation time; presentations of false evidence; minimization),
and the phenomenology of innocence (e.g., the tendency to waive Miranda
rights) that influence confessions, as well as their effects on judges and
juries. This article concludes with a strong recommendation for the mandatory
electronic recording of interrogations and considers other possibilities for the
reform of interrogation practices and the protection of vulnerable suspect
populations. [This is an official White Paper of the American Psychology-Law
Society.]
Police-Induced Confessions, Risk Factors and
Recommendations: Looking Ahead
Saul M. Kassin, Professor of Psychology, S. A. Drizin, T.
Grisso, G. H. Gudjonsson, R. A. Leo, & A. D. Redlich
Law and Human Behavior, 34, 49-52 (2010)
Reviewing the literature on police-induced confessions, we identified
suspect characteristics and interrogation tactics that influence confessions and
their effects on juries. We concluded with a call for the mandatory electronic
recording of interrogations and a consideration of other possible reforms. The
preceding commentaries make important substantive points that can lead us
forward—on the effects of videotaping of interrogations on case
dispositions; on the study of non-custodial methods, such as the controversial
Mr. Big technique; and on an analysis of why confessions, once withdrawn, elicit
such intractable responses compared to statements given by child and adult
victims. Toward these ends, we hope that this issue provides a platform for
future research aimed at improving the diagnostic value of confession
evidence.
Interviewing Suspects: Practice, Science, and Future
Directions
Saul M. Kassin, Professor of Psychology, S. C. Appleby,
& J. T. Perillo
Legal and Criminological Psychology, 15, 39-55 (2010)
Crime suspects in the U.S. are typically questioned in a two-step process
aimed, first, at behavioral lie detection during a pre-interrogation interview,
followed by the elicitation of a confession during the interrogation itself (in
Great Britain, the practice of investigative interviewing does not make this
sharp distinction). Research conducted on the first step shows that police
investigators often target innocent people for interrogation because of
erroneous but confident judgments of deception. Research on the second step
shows that innocent people are sometimes induced to confess to crimes they did
not commit as a function of certain dispositional vulnerabilities or the use of
overly persuasive interrogation tactics. Citing recent studies, this paper
proposes that laboratory paradigms be used to help build more diagnostic models
of interrogation. Substantively, we suggest that the British PEACE approach to
investigative interviewing may provide a potentially effective alternative to
the classic American interrogation. As a matter of policy, we suggest that the
videotaping of entire interrogations from a balanced camera perspective is
necessary to improve the fact finding accuracy of judges and juries who must
evaluate confession evidence in court.
Inside Interrogation: Why Innocent People
Confess
Saul M. Kassin, Professor of Psychology
American Journal of Trial Advocacy, 32, 525-539 (2009)
The last twenty years have seen an explosion of research on issues that
combine psychology and law. This Article discusses the legal and psychological
phenomenon of false confession. The author begins by discussing several
well-known cases of false confession. He then addresses the issue of why
innocent people are initially targeted for interrogation and offers an in-depth
analysis of the interrogation process.
On the Presumption of Evidentiary Independence: Can
Confessions Corrupt Eyewitness Identifications?
L. E. Hasel, & Saul M. Kassin, Professor of
Psychology
Psychological Science, 20, 122-126 (2009)
A confession is potent evidence, persuasive to judges and juries. Is it
possible that a confession can also affect other evidence? The present study
tested the hypothesis that a confession will alter eyewitness identification
decisions. Two days after witnessing a theft and making an identification
decision from a blank lineup, participants were told that certain lineup members
had confessed or denied guilt during a subsequent interrogation. Among those who
had made a selection but were told that another lineup member confessed, 61%
changed their identifications—and did so with confidence. Among those who
had not made an identification, 50% went on to select the confessor when his
identity was known. These findings challenge the presumption in law that
different forms of evidence are independent and suggest an important overlooked
mechanism by which innocent confessors are wrongfully convicted—namely,
that potentially exculpatory evidence is corrupted by the confession
itself.
Effects of Gestational Allopregnanolone Administration in
Rats Bred for High Affective Behavior
Betty Zimmerberg, Professor of Psychology, Ashley R.
Martinez ’09, Carolyn M. Skudder ’07, Elizabeth Y. Killien ’06,
Shivon A. Robinson ’11, & Susan A. Brunelli
Physiology and Behavior, 99, 212-217 (2010)
The anxiolytic neurosteroid allopregnanolone
(3α-hydroxy-5α-pregnan-20-one or 3α,5α-THP) has been
proposed to play a developmental role in emergent neural regulation of affective
behavior. This experiment examined whether allopregnanolone administered during
the last week of gestation in rats would alter neonatal and adult offspring
behaviors in the selectively-bred High vocalizing line, which has low levels of
allopregnanolone and high levels of anxious/depressive behaviors. Dams were
injected twice a day with the neurosteroid or vehicle, or handled as controls,
and were tested on the elevated plus maze just before parturition. Maternal
behavior was assessed throughout the first week of life, and affective behavior
in the offspring was tested at one week of age (ultrasonic vocalizations test)
and as adults (plus maze and forced swim tests). Offspring prenatally exposed
to allopregnanolone were less anxious as neonates and less depressed as adults
compared to both control groups. Only male adult offspring, however, revealed
less anxious behavior on the plus maze. Neither prenatal anxiety nor postnatal
maternal behavior was affected by gestational allopregnanolone, suggesting that
this prenatal exposure had a direct, long-lasting effect on the developing fetal
brain independent of mediating maternal factors. These results are discussed in
light of new evidence about the developmental role of the GABA-A receptor
prenatally.