In part 3 of our investigation of the most documented UFO in history, we hear from verified witnesses, explore how debunkers were misled, and consider forms of magnetic levitation similar to how the UFO is thought to have flown.
In part two of our multipart investigation into the most unusual UFO case in U.S. history, we learn about the project from the inside, based on verified witness accounts and the testimony of a whistleblower with a long story to tell.
Today we begin a multipart investigation into the most unusual UFO case in U.S. history – a case “debunked” several times, yet reportedly part of a secret project built around electromagnetic levitation.
Continuing from the first part of our exploration of Betty Andreasson Luca’s abduction experiences, we arrive at the second part, much longer than the first, which outlines the phases of her experience, examines some of the stranger aspects of the case, considers the question of body separation (i.e., out-of-body experiences), sees how the ETs meddled in Betty’s affairs, and ponders the ironic contradiction between her role as “messenger” (as she was told) and the inexplicability and secrecy that have enveloped the abduction activities and outcomes in her life. Lastly, the article shares additional reading by three authors who are trained in research and whose books compile patterns of experience from dozens of abductees – perhaps the best we can hope for in exploring the greatest continuing mystery of modern times.
The best-known cases of UFO abduction seem to be the ones with a lot of drama. The canonical case (perhaps because it was the first such to be documented) was that of Betty and Barney Hill, a young couple driving in their home state of New Hampshire in 1961 when, after noticing lights overhead, their vehicle lost power and they were forced aboard a hovering craft, subjected to several procedures, and given a dire warning about the future of humankind. It has been extensively researched (see, for example, Stanton Friedman and Kathleen Marden’s Captured!: The Betty and Barney Hill UFO Experience: The True Story of the World’s First Documented Alien Abduction).
However, this case is, for me, an abridged version of the multiple abductions experienced by Betty Andreasson, a humble housewife and mother of seven living in north central Massachusetts. She endured multiple visitations, and was able to communicate with the beings, to observe their craft, and even to be taken to the home world from which the beings said they came. Yet you’ve probably never heard of her.
Hyperspeed UAPs Pose Questions about their Physical Interaction with the Environment
Photographic evidence and anecdotal calculations of the movement of many UAPs point to a physically complex interaction of various forces. One rarely considered question in these observations is how air friction affects the motion and condition of these objects if they are moving at hyperspeed. How can an object flying at thousands of miles per hour mitigate the effects of air friction? Let us consider a method by which their levitation might also be explained.
Let us begin by accepting that the study of ancient civilizations is nothing if not beguiling. There is something elusive and tantalizing about what our distant past must have been like. On one hand, it has led directly to who we are today; on the other, to look upon the work of our genetic forebears makes us question how much of them remains in us, so alien does their life seem to our own. We have geology to blame for many secrets buried in our distant archaeological and anthropological past. Earth shifts have perennially kept these out of our view, making our own past the most tantalizing puzzle we have ever pondered.
Everything about this mystery is also of interest to humanistic science disciplines like paleoanthropology, geology, and most of all, archaeology, which owes its existence to our lingering ignorance concerning who we were and how we lived. But that hasn’t stopped certain fields of inquiry from constructing – and in some cases from refusing to construct – a worldview of that unseen past. Archaeology – literally translated as “the study of the old” – is one field where researchers operate like forensic investigators, looking for physical evidence that will make a “case” – typically, the case is a research question such as, “Did the earliest inhabitants of this valley live in settlements?” or “When did humans first use fire?” Given these and many other riddles, one way to approach any solution is through methods of scientific inquiry, something that many fields do today. Science’s methods are the envy of other disciplines, which have been increasingly copying and incorporating them into their own questions. But does using scientific methods or tools convert a field into a science?
Let’s take the question to kindergarten: in order to become a science, a field of study must meet three conditions: let’s call them focus, method, and reproducibility. The narrow way in which these terms are applied in science makes clear that these are conditions that other fields of study, such as art, do not (need to) fulfill. In terms of focus, any science – mathematics, biology, astronomy, and so on – investigates a topic or activity where the causes, changes, or processes under observation seem to behave systematically. The systematic nature of what is observed is what allows science to select one method – or many – to investigate it. The scientific method means that such a field will develop research questions, build hypotheses around those questions, devise ways to test the hypotheses, and then analyze the data in order to predict the outcome of future cases. In so doing, science aims to fulfill the third condition, reproducibility: it should predict the future, given conditions similar to those studied thus far.
Because it relies on the similarity of phenomena observed time and again, doing science is not impossibly difficult; technically, the work is relatively straightforward. In practice, we might start by developing a collection of samples of something. Sampling from many similar cases is important because, by using the correct analytic methods, science will be able to extract an abstract pattern from them. By contrast, the arts, fueled by random and other creative processes, cannot do this: we can sample any 1,000 sequentially made paintings by Picasso (the best estimate is that he made 13,500 of them), yet no mathematical or other systematic method will allow us to predict how his next painting would have looked. That is why, despite having many, many samples available for analysis, art history is not a science. Sampling, as we see, is centrally about collecting many instances which must be much more similar to each other than dissimilar.
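To make the sampling-and-prediction contrast concrete, here is a minimal, illustrative sketch in Python (not part of the original essay; the falling-object measurements are invented purely for the example): systematic observations let us fit a pattern and predict an unseen case, which is precisely what a shelf of Picasso canvases will never permit.

```python
import numpy as np

# Hypothetical, invented data: repeated measurements of how far an object
# falls in t seconds. Because the phenomenon behaves systematically,
# many similar samples reveal a pattern.
rng = np.random.default_rng(0)
t = np.linspace(0.5, 3.0, 30)                       # observation times (s)
d = 0.5 * 9.8 * t**2 + rng.normal(0, 0.2, t.size)   # noisy distances (m)

# Fit an abstract pattern (a quadratic model) to the samples...
coeffs = np.polyfit(t, d, deg=2)

# ...and use it to predict a case not yet observed (reproducibility).
t_new = 4.0
print(f"predicted fall distance at {t_new} s: {np.polyval(coeffs, t_new):.1f} m")
# expected to be close to 0.5 * 9.8 * 4.0**2 = 78.4 m
```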
Using this context, it becomes evident that archaeology is not really a science. It does employ tools and instruments to understand the past through human fossil, artifact, and geological evidence, but the sites and cultural remnants available are too scarce and too unique to serve as samples that support true scientific reasoning. This sampling crisis is why archaeology’s knowledge is unsteady over time, and why it finds itself revising its theories more frequently than other disciplines do, often reworking large components of its evolutionary hypotheses each time a new site is excavated. For archaeology, what science can offer is insufficient – although the opposite is usually assumed. And once science is put aside, speculation necessarily takes over. Shards, bones, and relics can be carbon dated and cataloged scientifically. But this is meager archival work; the Holy Grail of archaeology is to uncover worlds. Yet this paucity of clues about the worlds it investigates also makes it incapable of prediction, since, as we know, merely employing technical instrumentation doesn’t automatically promote a field to the status of a true science. Astrology, whose predictive power is no worse than that of archaeology, has been denied the status of a science for centuries, yet it has exclusively and for a long time employed computer models and astronomical data. (Perhaps now would be a prematurely provocative moment to mention The Tenacious Mars Effect, Suitbert Ertel and Kenneth Irving’s detailed account of Michel Gauquelin, a French statistician who in the 1950s set out to debunk astrology by sustained mathematical inference. Instead, he discovered a strong correlation between sports champions and the position of the planet Mars in a person’s birth chart. While archaeology awaits its Michel Gauquelin, let us rejoin our initial train of thought.)
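(As background only, and not drawn from the essay itself: the “carbon dated” step rests on a textbook decay relation. With N0 the original carbon-14 content of a sample, N the amount remaining, and a half-life of about 5,730 years, the age is

$$ t \;=\; \frac{t_{1/2}}{\ln 2}\,\ln\frac{N_0}{N} \;\approx\; 8267\,\ln\frac{N_0}{N}\ \text{years}, $$

which is also why radiocarbon dating reaches back only on the order of 50,000 years – organic relics, not million-year-old stones, are its territory.)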
Astrology, of course, has no scarcity of stars and celestial bodies to contemplate (nor, regrettably, of uninformed yet opinionated detractors who expostulate, for that matter). Because of its pervasive lack of sample-friendly data, however, archaeology can only produce theories about human existence either from loose objects without evidence of a collective of people (as with the cave paintings in Lascaux, France and Altamira, Spain) or from remains that correspond to settlements of people living and working together.
Everything begins with loose objects and samples, and at some point in time, humans – or their ancestors – coalesce into a territorial grouping. How and when did this first happen? The archaeological “party line” on the formation of human collectives is this: the first settlements emerged toward the end of the Neolithic period, somewhere between 4,000 and 10,000 years ago. These “proto-cities” were characterized by farming, domestication of animals, and the abandonment of a hunter-gatherer way of life. Thus, in this depiction, our predecessors lived on earth for millions of years until, eventually, modern humans emerged 300,000 years ago and became behaviorally modern some time between 160,000 and 60,000 years ago (“modern” in the sense of developing symbolic behavior, planning, and abstract reasoning). With this time scale, do we really believe that settlements didn’t exist until about 10,000 years ago?
The problem, clearly, is that we are relying on archaeology’s timeline, which – given that it says virtually nothing about millions of years of human evolution – is biased toward the “too modern” and overly centered on the Neolithic period. If we want to understand why human existence has been such a mystery for so long, we need to look back further than 10,000 years, to where humanity really lived out the majority of its history.
Any remote period of time will do as our point of departure, but let’s take the very beginning of the Stone Age, around 3.4 million years ago, with the earliest evidence of tool use by human ancestors.
Figure 1. Hominin species Australopithecus afarensis, approximately 3.3 million years old, Ethiopia.
Importantly, this timestamp is accepted because, among other cases, it was in the year 2000 that the most complete-ever skeleton of a hominin (an early member of the human lineage, and thus a direct human ancestor) was found. It was a 3.3-million-year-old Australopithecus afarensis fossil in the Afar region of northeastern Ethiopia, and as excavated finds go, this one revolutionized the study of human origins in two surprising ways. Near the skeleton were also found fossilized bones whose marks suggested that stone tools had been used on them, compelling scientists to revise the assumed earliest date of human tool use. Here, then, was evidence of tool use about a million years earlier than scholars had previously stipulated. Furthermore, subsequent analysis of the skeleton’s shoulder blades showed that hominins were climbing trees for food and/or shelter far longer than was previously assumed. There is thus no sense of when humans, or their predecessors, became hunter-gatherers, but as of three million years ago, they were living in trees and were not migratory. [1]
Let us take a closer look at the tool revelation. Perhaps it’s hard to imagine gear being used by our genetic forerunners three million years ago, but this wasn’t the only discovery of such use corresponding to the timeframe of the Ethiopian skeleton discovery. Give or take a few hundred thousand years, other positive signs of tool use date to approximately similar periods in various other locations (Kenya, Israel, and South Africa, among others). Thus, triangulating similar discoveries at various archaeological sites, we can fix the dawn of the Stone Age (inaugurated and so named to signal the first use of tools) at a little over three million years ago, much earlier than was previously believed. Even so, both fossil and non-fossil records of the Stone Age are riddled with gaps in data, understanding, and explanations, even regarding rudimentary but sensible questions, such as how certain tools could have been fashioned so perfectly by a people without machinery, as in the case of this 4.7-inch hole stone found in Finland, which bears a beveled, flawlessly round cavity:
Figure 2. Neolithic Age hole stone, Finland.
Conventionally, this humble object, one of countless cataloged in the archaeology of Europe and other continents, could be well over a million years old, but dating it is, in general, impossible – not least because, despite the vast time interval of the Neolithic epoch, European archaeology has been able to find no transitional markers (which are very important to that discipline) to serve as reference points in human development, and, to complicate matters, the archaeology of other lands utilizes different markers and transitions. In any case, taken together, we still have the three great epochs of human evolution as they are believed to have unfolded, presumably everywhere on earth: first the Stone Age, which runs over three million years in duration, and then the two periods that followed it (the Bronze Age and the Iron Age), which together account for (only) a few thousand years.
Is something not quite right in this lopsided timeline of human evolutionary periods? The advent of tools in the archaeological record is one area of interest, but we can also ask about more significant time markers, like the birth of cities. On large questions like this, the field’s gaze narrows considerably, postulating that communal life has been possible only in the last ten or so millennia.
The problem with this too-short and too-recent time span becomes clearer when we step outside archaeology (which does not consider the comparative zoology of other animals). Apes have demonstrated monogamous grouping and other communal life patterns for millions of years, and the earliest-known hominoids – great ape primates – date from about 36.6 million years ago. So, we are to believe, apes settled, and were not hunter-gatherers, while humans, arriving much later, were? Conventional archaeological scholarship holds onto this argument because there isn’t evidence of human settlements prior to about 10,000 years ago, and it isn’t keen on speculating about encampments, colonies, proto-cities, or homesteads, much less actual cities, before then. Surely this time period will eventually recede with newer evidence of prehistoric city-like dwellings. One reason for believing this lies in the record of similar revisions to archaeological and paleontological models relating to other facets of human activity. Another is the speculative work of Graham Hancock, whose questions point to missing information about human evolution sometime between the Stone and the Bronze ages.
The date marking the first demonstrated control of fire, for example – an ability relatively modern humans acquired but earlier species supposedly did not – also underwent a backward progression, when the earliest known instance of fire use, at Qesem Cave in Israel somewhere between 300,000 and 400,000 years ago, was upended by a succession of startling finds. First came evidence of campfires established roughly one million years ago in the Wonderwerk Cave in South Africa, and nearby as well [2], and then in Kenya’s FxJj20 site complex in Koobi Fora, where soil samples and spectrographic analysis of potlid fractures and certain fragments indicated the use of controlled fire, seemingly for cooking, dating back 1.5 million years. [3] And again, we can expect the date when humans first used fire to recede even further in time: the study’s lead author, Sarah Hlubik, is evaluating fire use during the Early Pleistocene, which goes back as far as 2.5 million years ago. [4] If we wish to predict (pseudo)scientifically, the pattern is clear: science continues to encounter evidence of human existence and progress dating back to earlier and earlier periods. [5]
Even so, as mentioned, archaeology seems much less enthusiastic about the possibility of complex civilizations before the Younger Dryas, a period of cataclysmic geological disruption around 13,000 years ago, as scholars like Graham Hancock have argued for years. This kind of speculation is heresy to archaeologists because – to repeat – there isn’t direct physical evidence of such vanished cultures. But what of indirect evidence? This is where Hancock’s focus lies, but it will be difficult to get to it, because his work is currently being contorted by archaeologists themselves, who are offended by his ideas. Rather than engaging with what they disagree with, they prefer to imply that he is a racist whilst hurling other ad hominem attacks in a manner that scarcely seems intellectual. In that sense, they are proving Hancock’s point in his Ancient Apocalypse series on Netflix, namely, that archaeology’s interests and worldview are basically fixed at this point, and the field will not allow dialogue with anyone whose position diverges from that. For the record, Hancock’s claims and logic follow two different lines of reasoning that meet at the nexus of one large question, so let’s examine these.
As one example of Hancock’s probing of direct evidence that humans settled into communal complexes before 10,000 years ago, consider the case of Göbekli Tepe, a large multi-structure site atop a mound in Turkey, discovered around three decades ago and dating back to the heart of the Neolithic Age, around 11,000 years ago. Klaus Schmidt, the site’s primary European archaeologist, asserted a view that mainstream archaeology continues to support, judging from this recent article in the journal of the Archaeological Institute of America:
The buildings and their multiton pillars, along with smaller, rectangular structures higher on the slope of the hill, were monumental communal buildings erected by people at a time before they had established permanent settlements, engaged in agriculture, or bred domesticated animals. Schmidt did not believe that anyone had ever lived at the site. He suggested that, in the Neolithic period between 9500 and 8200 B.C., bands of nomads had come together regularly to set up stone circles and carve pillars, and then deliberately covered them up with the rocks, gravel, and other rubble he found filling in the various enclosures. [6]
Hancock’s problem with this view ought to be the problem that archaeologists have with it as well: Neolithic Age people were hunter-gatherers and thus supposedly not yet living in settlements, cities, or any physically fixed structures. So where would hunter-gatherers – a people whose way of life preceded the design of such settlements, and who, because of their nomadic ways, would never have had reason to build anything – have acquired (and practiced) the kind of architectural proficiency and mastery of construction that built this complex site? It doesn’t seem to me that this line of inquiry should be cause for insulting Hancock; rather, it is a sensible question to ask. The problem for archaeologists is that it can only be answered by revising what archaeology claims about Neolithic people, for perhaps they weren’t so maladroit and primitive as they are implied to have been. Indeed, the aforementioned archaeological journal goes on to report that
Schmidt posited that both the construction and abandonment of what he called “special enclosures” had been accompanied by great feasts of local game washed down with beer brewed from wild grasses and grains. Those who gathered for these periodic monumental building projects scattered before coming back decades or centuries later to do it all again. [7]
And yet, no one is asking any questions about this portrayal.
But, let us ask directly: what is so offensive about what Hancock says of this archaeological site? Observe the mapping of the enclosures thus far excavated (estimated to represent only 10% of the complex, most of which still remains buried), as presented in Hancock’s Netflix documentary, Ancient Apocalypse.
Figure 3. Site map of Göbekli Tepe. Source: Ancient Apocalypse, Netflix.
In the scene that introduces Göbekli Tepe, as the camera shows the site up close, Hancock narrates that “Usually the more we practice something, the better we get at it.” At minute 8:39 of episode 5, as the camera turns to show modern quarrymen using power equipment to cut stone blocks in the hills around the site itself, he adds that
We assume that ancient cultures must have worked the same way, improving their skills over time. But Göbekli Tepe, and in particular Enclosure D, seem to turn this assumption upside down. How did a community of Stone Age hunter-gatherers succeed so brilliantly at building with megaliths at their very first attempt? Isn’t it time to consider the possibility that the great megalithic enclosures weren’t some miraculous overnight invention of hunter-gatherers, but were a legacy from a precociously advanced lost civilization of prehistory? This is a notion which mainstream archaeologists find almost offensive. Academic scholars have got locked into a particular framework that during the Ice Age, the entire human population of the Earth was at the hunter-gatherer stage.
There is, to any reasonable mind, nothing heretical here. As we see from this article of the Archaeological Institute of America, Hancock is merely repeating what they themselves claim.
Besides the inconvenient questions, Hancock’s second approach is to conclude that we must speculate about the origins of structures which do not fit the conventional archaeological timeline of human abilities and skill sets. Even though there are many similar examples of Hancock’s speculative thought currently being deformed by archaeologists who feel he hasn’t been deferential to the party line, it bears mentioning that, factually, Hancock doesn’t ever directly state or claim that archaeology is wrong. Instead, as we have seen, he argues that the field has overlooked certain key details that ought to have been factored into the overall narrative of what could have motivated the construction of a given site, settlement, or sculptural monument – as any crime-scene investigator would, from specialized training, have known to do. But let us be brutally honest: what archaeology believes about Göbekli Tepe, the largest archaeological site in the world – that it was built by beer-guzzling feasters – is not just potentially wrong, it’s also physically impossible. Professional pride and umbrage being what they are, no archaeologist I’m able to cite has addressed Hancock’s actual observations as he has presented them. These are the kinds of overlooked details, Hancock points out, which do not add up to the archaeological portrait of the people who either lived in or constructed the sites explored in each episode of Ancient Apocalypse.
Now, let’s turn to the question of the absence of physical evidence, since this is the basis of archaeology’s problem with Hancock’s speculation. But wait – that’s exactly what speculation is: thinking of what might be, or might have been, possible in the absence of evidence to prove how what is might have happened. And archaeologists – just like other social scientists – have been known to indulge in a little speculation themselves. For example, let’s examine this (Getty-licensable) image portraying a typical Stone Age man in a setting which may seem familiar:
Figure 4. Drawing Depicting Men and Habitat of the Neolithic [9]
and, while we are at this point in our analysis, also an older version: another caricature of Neolithic people, typified in this 1897 rendition:
We need but glance quickly at these ludicrous vignettes to see how many misguided social-science speculations came baked into these kinds of crude portrayals, for there is no evidence that Stone Age people dressed or looked like these loony inebriates.
And today, scholars should know better than to concoct irresponsibly conceived imagery of people. A good start would be merely to ask logical questions like, “Can such people in such renderings – unkempt, barely clothed, and so crude-looking – really have resembled the master designers and builders of Göbekli Tepe?” If anything seems exasperating, and perhaps racist (a charge perplexingly leveled at Hancock, who has never made any comparative statements on race), it would be these kinds of condescending portrayals of what, judging from the physical record of what was built, were mathematically informed minds resourceful in measuring, carving, and balancing heavy stones atop one another – and, as each enclosure at the excavated site makes evident, aligning the structures to the star Sirius. All told, Hancock offers the only coherent theory of the causality behind this level of engineering: the builders and their expertise came from somewhere else. Where they came from is not clear, but they certainly were not hunter-gatherers.
But another counterpoint on the matter of physical evidence is that during the Younger Dryas, which coincides in time with the rise of this temple site, the earth underwent far greater temperature extremes, for many centuries, than we have seen since the beginning of the Industrial Revolution. For the length of the Younger Dryas geological period, the earth roiled continuously under sudden, wide, and extended spikes and dips in global temperature; landslides, floods (from melting polar ice), and other intense climatological shifts would have submerged or destroyed much of the land surface. The idea of an existing civilization being buried in such cataclysms may be the stuff of hypothesis, but the weather extremes that altered the physical landscape of the planet were not, and thus the two cannot realistically be separated from each other.
And so, it is quite problematic that archaeology can concoct what Neolithic man looked like and how he dressed, down to his or her hair, yet denies that physical evidence could have disappeared under not one but several geological conflagrations – given that, even after being conservatively revised downward, the ice core data from Greenland show that the earth’s temperature shifted by roughly five degrees Celsius within the span of a few centuries. [11]
Figure 5. The dark band in this ice core from the West Antarctic Ice Sheet Divide (WAIS Divide) is a layer of volcanic ash that settled on the ice sheet approximately 21,000 years ago.[12]
For comparison, modern earth’s drastic weather patterns are tied to a change of less than 1 degree Celsius in global temperature over a century, as this climate change report shows:
Table 1. Global temperature anomalies through 2021, compared to the 1951–1980 average [13]
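A rough, back-of-the-envelope comparison using the figures quoted above (and assuming “a few centuries” means roughly three hundred years) puts the two rates side by side:

$$ \text{Younger Dryas: } \frac{\sim 5\ ^{\circ}\mathrm{C}}{\sim 300\ \text{years}} \approx 1.7\ ^{\circ}\mathrm{C}\ \text{per century}, \qquad \text{modern era: } < 1\ ^{\circ}\mathrm{C}\ \text{per century}. $$

On this crude estimate, the Younger Dryas shifts were on the order of twice as fast as today’s warming, and they persisted for many centuries rather than one.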
There is no question that colossal and prolonged geological upheavals around 13,000 years ago wrought correspondingly formidable physical changes to the earth’s surface, and what now emerges as the most difficult question is this: how is it possible to find actual sites and relics from the periods before and during the Younger Dryas? Given the many cataclysmic earth changes that have taken place over thirteen millennia, we can ask more specifically: where are the likeliest places to locate physical remnants of prehistoric civilizations? Places that meet two conditions would seem the most promising: current urban centers, built on the thickest bedrock. And why is that?
We know that several cities today, such as Mexico City, sit atop older, archaic structures, and, for various reasons, once a community of people settles in one location long enough for it to grow into a metropolis, they rarely leave unless some sudden or gradual natural event renders living conditions there impossible. The prevailing cause of permanent abandonment is a natural disaster, as in the case of Pompeii, the Roman metropolis which one day in 79 A.D. disappeared, completely buried under meters of ash and pumice from a massive eruption of Mount Vesuvius. One might rationally conclude that there’s no real difference between natural and man-made disasters as determinants of whether people who leave a destroyed city would return, but natural events are more likely to cause the permanent disappearance of cities. A case in point is that of Hiroshima, which boasted a population of 350,000 residents in 1945, when in August of that year the US dropped a 9,000 lb. nuclear bomb upon it, resulting in 135,000 casualties. [14] Hiroshima was left both incinerated and radioactive, and, if ever there was reason to abandon a place forever, this was the most forceful. Yet today, less than eighty years later, Hiroshima thrives with what has to have been a record-breaking rise in population, to 1.2 million residents. Many similarly sized cities – none of which suffered such devastation – experienced similar growth, but only over a century or longer, as in the case of San Francisco. [15]
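As a rough illustration of how unusual that rebound is (a back-of-the-envelope figure, assuming a surviving post-war base of about 350,000 − 135,000 ≈ 215,000 people and an interval of roughly 78 years), the implied average growth rate is

$$ r \;=\; \left(\frac{1{,}200{,}000}{215{,}000}\right)^{1/78} - 1 \;\approx\; 2.2\%\ \text{per year}, $$

a pace sustained for decades that, as noted above, comparable cities spared such devastation took a century or more to match.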
Pompeii, as the emblematic case of natural upheaval, and Hiroshima, the most catastrophic story of man-made change, make us ask about other examples of each: are humans equally hardy under geology’s wrath in other cases? No, they are not. We might consider the ill-fated Italian town of Craco, which was continuously inhabited for three thousand years, even establishing a university in 1276 and flourishing normally, until landslides, followed by a flood and an earthquake, led to its abandonment in 1980. No one has returned to it.
Similarly, Plymouth, the abandoned capital of the Caribbean island of Montserrat, is now visually difficult to find: as with Pompeii, volcanic eruptions – two of them – swallowed the city less than thirty years ago, and it lies mostly buried under ash, its ground now too soft and silty to support even a small road. In less than a century, it will scarcely be a memory. [16]
Taken together, Hiroshima’s decisive recovery and resiliency on one hand, and Pompeii’s, Craco’s, and Plymouth’s abrupt and permanent finales on the other, tell us one thing about the difference between large-scale man-made and natural destruction: artificial disruption is often reversible, whereas geological upheaval becomes permanent and, over time, scarcely leaves clues or marks. Expanding this to a timescale of 13,000 years to match the duration of the Younger Dryas temperature swings – natural changes, including geological shifts, acting not over one century but over 130 – we should accept that much remains missing from the human story of civilization, with geology’s burial of the evidence as the perpetrator of this amnesia.
And so, with or without the support of conventional archaeological thinking, it is clear from all these perspectives that, caught in the maelstrom of the Younger Dryas, archaeological continuity would have evaporated under geological holocausts, after which people are unlikely to have returned, or even to have remembered what progress had developed until then.
Brain, C. K., and A. Sillen. “Evidence from the Swartkrans Cave for the Earliest Use of Fire.” Nature 336, no. 6198 (December 1988): 464–66. https://doi.org/10.1038/336464a0.
Getty Images. “Cite Lacustre, 1897. By French Artist and Illustrator, Fernand Cormon…” Accessed February 2, 2023. https://www.gettyimages.com/detail/news-photo/cite-lacustre-1897-by-french-artist-and-illustrator-fernand-news-photo/860477026.
Getty Images. “Drawing Depicting Men and Habitat of the Neolithic.” Accessed February 2, 2023. https://www.gettyimages.com/detail/news-photo/drawing-depicting-men-and-habitat-of-the-neolithic-news-photo/89169391.
Freedman, Andrew. “The Most Startling Facts in 2021 Climate Report.” Axios, January 14, 2022. https://www.axios.com/2022/01/14/earth-warmer-climate-change-report.
“Greenland Ice May Exaggerate Magnitude of 13,000-Year-Old Deep Freeze.” Accessed February 2, 2023. https://news.wisc.edu/greenland-ice-may-exaggerate-magnitude-of-13000-year-old-deep-freeze/.
Hancock, Graham. “Ancient Apocalypse (Netflix Official Site).” Netflix.com. Accessed February 2, 2023. https://www.netflix.com/title/81211003.
Hlubik, Sarah, Russell Cutts, David R. Braun, Francesco Berna, Craig S. Feibel, and John W. K. Harris. “Hominin Fire Use in the Okote Member at Koobi Fora, Kenya: New Evidence for the Old Debate.” Journal of Human Evolution 133 (August 1, 2019): 214–29. https://doi.org/10.1016/j.jhevol.2019.01.010.
Kaplan, Matt. “Million-Year-Old Ash Hints at Origins of Cooking.” Nature, April 2, 2012. https://doi.org/10.1038/nature.2012.10372.
“Last Stand of the Hunter-Gatherers? – Archaeology Magazine.” Accessed February 2, 2023. https://www.archaeology.org/issues/422-2105/features/9591-turkey-gobekli-tepe-hunter-gatherer.
“Montserrat’s Archaeology and History: Important Dates and Sites | Archaeology at Brown.” Accessed February 2, 2023. https://blogs.brown.edu/archaeology/fieldwork/montserrat/montserrats-archaeological-resources/.
“Ethiopia: The Dikika Research Project.” California Academy of Sciences. Accessed February 2, 2023, https://www.calacademy.org/learn-explore/scientific-expeditions/ethiopia-the-dikika-research-project.
“San Francisco, California Population History | 1860 – 2022.” Accessed February 2, 2023. https://www.biggestuscities.com/city/san-francisco-california.
“Sarah Hlubik | Center for the Advanced Study of Human Paleobiology | The George Washington University.” Accessed February 2, 2023. https://cashp.columbian.gwu.edu/sarah-hlubik.
“Total Casualties | The Atomic Bombings of Hiroshima and Nagasaki | Historical Documents | Atomicarchive.Com.” Accessed February 2, 2023. https://www.atomicarchive.com/resources/documents/med/med_chp10.html.
To work in the world of art is to embrace few strict rules, even in a forest of ideologies. One of the apparent constants, however, could be articulated by an equation, or more accurately, an equivalence relation that holds between the idea of art as one term and freedom as the other. The artist is free to create openly, the collector is free to gather or dispose of a collection freely, the viewer is free to engage, interpret, or dismiss the work. There is, however, one region of the art world where this equivalence is more problematic, less value-free, in fact, less free, than elsewhere. It happens in the role of the curator.
Much has been made, correctly, I think, of the blows to apparent neutrality that curatorial activity implies. The sequence of work that underlies the exhibition sets into motion not merely the procession of examples aligned within a singular theme — usually perceptual or by school of practice — but by necessity, also the universe of assumptions that the curator reads into this motorcade. Such assumptions, always inferred because they are never articulated, color every constructive dimension within the challenge of erecting a show, down to the selection of wall colors, with the added complication that presentation and representation occur in different discursive worlds. For, what is presented in one space, namely, the gallery, museum or other venue for exhibition, is typically critiqued in another, normally, the newspaper, journal, or other venue for discussion.
Embroiled in this asymmetry, one new-media affordance provides a third kind of venue in the form of the virtual gallery, of which there are many to choose from, including artsteps, a service hosting an entire marketplace of gallery templates like the one above. But conceptually, we should want to ponder the roles and attributes of this kind of meta-venue. Correspondingly, a survey of such software brings several thoughts and questions to mind. To begin with, we might ask why a virtual gallery makes sense, given that the very non-physical status of the virtual world and its “objects on display” makes it feasible to reduce the gallery experience to an array of URLs with descriptive captions. In this minimal possibility, the gallery would be entirely replaced by its function as the basis for a show, something that is in turn the product of a curatorial statement. But these virtual galleries do not merely (re)present work in digital space; they reproduce the gallery itself as an object of reception. Why, then, is it necessary to envision artwork in a simulated gallery?
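As a thought experiment only (the field names and URLs below are invented for the sketch, written here in Python), the “minimal possibility” just described really can be written down in a few lines: an ordered list of links and captions, with the ordering itself as the only surviving trace of the curatorial statement.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One work in the hypothetical minimal 'show': a link plus a descriptive caption."""
    url: str
    caption: str

# The whole exhibition reduced to a sequence; no walls, no rooms, no architecture.
show = [
    Entry("https://example.org/works/01.jpg", "Untitled, oil on canvas"),
    Entry("https://example.org/works/02.jpg", "Study, bronze"),
    Entry("https://example.org/works/03.jpg", "Night interior, video still"),
]

for i, e in enumerate(show, start=1):
    print(f"{i}. {e.caption} – {e.url}")
```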
To be sure, the display of a single work, sculptural, filmic, or painterly, is an experience already fraught with loss in translation from the physical to the virtual. We know that when a work – even a sculptural one – is brought onto the digital screen, several things happen. It retains all of its recognizable features as a medium; the work does not fail to register as an instance of sculpture, for example. To the inverse extent, however, that the notion of a work’s medium is preserved, its materiality is entirely lost within digital mediation. Gone is the weft in the canvas, the nick in the marble; in fact, all evidence of the work as a process, and of its evolution toward becoming a product, disappears. Here, while we can see the work – often with greater clarity – we can no longer touch it with our eyes.
But this transformation, because it is shared by all digital reflections of material objects, can be neither an endorsement nor indictment of the virtual gallery situation, and is therefore not part of the rationale for such a mechanism as the virtual gallery. The answer lies elsewhere, for the experience of an exhibit revolves not around the presence of images but rather their adjacency within a singular spatial context. The theatrical contribution of a gallery simulation is necessary not to the individual integrity of work but to the curatorial operation that accompanies the collective property of works in a show. Beyond being possessed of its autonomous character, every artwork suggests itself in this plural connotation as well. The walls and spaces that provide the reinforcing ground for the ad hoc collection make the sense of a curated show possible, a supporting reality that lends more to each work than critics of “the gallery system” can account for, even with the rather stilted simulated space of the virtual gallery, one of the software genres that replaces architecture with interface design.
When I lived in Spain as a child, I recall the stories of people (often gypsies) working in the mines of Andalucía, the southern region of the country. Who would want to work in mines? Nobody, of course, but the poorest of the poor. These mines had next to nothing in the way of safety standards, and every year many workers would be lost. Even up north in Madrid, we heard about the perils of these terrible places. This progressive house track’s title is a tribute to that place, and that memory.
Of course, it’s not the same as the flamenco genre known as “La Minera,” which is a dirge, sung a cappella and always with a tone of mourning for a lost soul, since I write progressive house music. Love and respect to my brothers and sisters in the region, and to lovers of music everywhere.
The Dragonfly UFO Incidents, Part 3 – April 17, 2023
Read the article:
https://franciscoricardo.substack.com/p/the-dragonfly-ufo-incidents-part-fca