#LoC; #CloudComputing; #AndrewWMellonFoundation; #CCHC; #InformationDissemination
Washington/Canadian-Media: To better serve research and creative uses of Library of Congress (LoC) resources, the Andrew W. Mellon Foundation awarded a $1 million grant in 2019 for the Computing Cultural Heritage in the Cloud (CCHC) initiative, LoC reported.
Cloud Computing. Image credit: Unsplash
CCHC will use the affordances of cloud-based technology to document what is required to support this work, from the levels of staff support needed to the costs associated with serving and transforming digital materials.
LC Labs has partnered this year with three scholars, Lincoln Mullen, Lauren Tilton, and Andromeda Yelton, who will explore the Library’s digital collections using cloud computing services in their individual research projects.
Though impressively varied in their aims, the projects all center on computation: Mullen is attempting to use machine learning to extract biblical quotations from across the Library’s collections; Tilton seeks to refine and design computer vision methods by examining approximately 250,000 early 20th-century images; and Yelton plans to cluster conceptually similar documents to create an interactive data visualization that supports users who have only a rough idea of the items they’re looking for.
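Of the three projects, Yelton's clustering plan is the easiest to sketch in miniature. The toy example below (a handful of made-up document descriptions, TF-IDF vectors, and k-means from scikit-learn) is purely illustrative and is not the Library's data or CCHC's actual pipeline:

```python
# Hypothetical sketch of clustering conceptually similar documents.
# The texts, vectorizer and cluster count are placeholders, not the
# Library's collections or CCHC's actual workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "letters of a civil war soldier to his family",
    "field diary kept by an infantry soldier during the war",
    "sheet music for a popular ragtime piano piece",
    "piano score of an early jazz composition",
]

# Turn each document into a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in sorted(zip(labels, documents)):
    print(label, doc)
```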
In addition, each of these projects will take a public humanities focus, engaging audiences in transforming access to knowledge.
Collectively, these projects will also inform the Library’s understanding of the benefits and challenges of using distributed computing environments in large-scale digital library settings.
Results from the individual projects will be documented and shared openly to complement the findings from the institution’s overarching investigation.
In an interview with Leah Weinryb-Grohsgal, Innovation Specialist at the Library of Congress, Alice Goldfarb, who has joined the LC Labs team as an Innovation Specialist for CCHC, said her work at CCHC is to determine the requirements of a service model for supporting cloud computing digital humanities research in the future, to explore the changes required to disseminate collections to more people in more ways, and to build on and contribute to the work other people are doing.
Given the vast scale of the Library’s collections, Goldfarb said, there would be great benefit in making those collections available for cloud computing. Libraries already consider the ethics of this type of work, she added, and the team wants to extend that approach to digital work and learn to steward and share data in systematic ways digitally.
#AI; #ProteinStructure; #ScienceAndResearch; #CASP
New York/Canadian-Media: Proteins are the minions of life, working alone or together to build, manage, fuel, protect, and eventually destroy cells. To function, these long chains of amino acids twist and fold and intertwine into complex shapes that can be slow, even impossible, to decipher.
Image: A new artificial intelligence program readily predicts the structure of protein complexes, such as the immune signal interleukin-12 (blue) bound to its receptor. Image credit: Ian Haydon/Institute for Protein Design
Scientists have dreamed of simply predicting a protein’s shape from its amino acid sequence—an ability that would open a world of insights into the workings of life.
“This problem has been around for 50 years; lots of people have broken their head on it,” says John Moult, a structural biologist at the University of Maryland, Shady Grove. But a practical solution is now within researchers’ grasp.
Several months ago, in a result hailed as a turning point, computational biologists showed that artificial intelligence (AI) could accurately predict protein shapes. That group describes its approach online in Nature today. Meanwhile, David Baker and Minkyung Baek at the University of Washington, Seattle, and their colleagues present their AI-based structure prediction approach online in Science. Their method works on not just simple proteins, but also complexes of proteins.
Baker’s and Baek’s method and computer code have been available for weeks, and the team has already used it to model more than 4500 protein sequences submitted by other researchers. Savvas Savvides, a structural biologist at Ghent University, had tried six times to model a problematic protein. He says Baker’s and Baek’s program, called RoseTTAFold, “paved the way to a structure solution.”
In fall of 2020, DeepMind, a U.K.-based AI company owned by Google, wowed the field with its structure predictions in a biennial competition. Called Critical Assessment of Protein Structure Prediction (CASP), the competition uses structures newly determined using laborious lab techniques such as x-ray crystallography as benchmarks. DeepMind’s program, AlphaFold2, did “really extraordinary things [predicting] protein structures with atomic accuracy,” says Moult, who organizes CASP.
But for many structural biologists, AlphaFold2 was a tease: “Incredibly exciting but also very frustrating,” says David Agard, a structural biophysicist at the University of California, San Francisco. In mid-June, 3 days after the Baker lab posted its RoseTTAFold preprint, Demis Hassabis, DeepMind’s CEO, tweeted that AlphaFold2’s details were under review at a publication and the company would provide “broad free access to AlphaFold for the scientific community.” Nature has now rushed to publish that paper to coincide with the Science paper. “It is appropriate that it is not coming out after ours, as our work is really based on their advances,” Baker says.
DeepMind’s 30-minute presentation at CASP had been enough to inspire Baek to develop her own approach. Like AlphaFold2, it uses AI’s ability to discern patterns in vast databases of examples, generating ever more informed and accurate iterations as it learns. When given a new protein to model, RoseTTAFold proceeds along multiple “tracks.” One compares the protein’s amino acid sequence with all similar sequences in protein databases. Another predicts pairwise interactions between amino acids within the protein, and a third compiles the putative 3D structure. The program bounces among the tracks to refine the model, using the output of each one to update the others. DeepMind’s approach involves just two tracks.
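The "track" idea can be illustrated with a toy loop in which several representations of the same protein are updated in turn, each consuming the others' output. The sketch below is purely conceptual: it is not RoseTTAFold's architecture, code or data, and the update rules are invented for illustration.

```python
# Conceptual sketch of a multi-track refinement loop, invented for illustration.
# It is NOT RoseTTAFold's actual model; the "updates" are arbitrary arithmetic.

def update_pair_track(seq, coords):
    """Hypothetical pairwise features built from sequence features and rough 3D info."""
    n = len(seq)
    return [[seq[i] * seq[j] + abs(coords[i] - coords[j]) for j in range(n)]
            for i in range(n)]

def update_sequence_track(seq, pair):
    """Hypothetical refinement of sequence features using pairwise information."""
    return [s + 0.1 * sum(row) / len(row) for s, row in zip(seq, pair)]

def update_structure_track(pair):
    """Hypothetical compilation of a crude 1D 'structure' from pairwise features."""
    return [sum(row) / len(row) for row in pair]

# Toy inputs standing in for a five-residue protein.
seq = [0.2, 0.5, 0.1, 0.9, 0.4]
coords = [0.0] * 5

# The tracks bounce information back and forth, each update feeding the others.
for _ in range(3):
    pair = update_pair_track(seq, coords)
    seq = update_sequence_track(seq, pair)
    coords = update_structure_track(pair)

print(coords)
```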
Gira Bhabha, a cell and structural biologist at New York University School of Medicine, says both methods work well. “Both the DeepMind and Baker lab advances are phenomenal and will change how we can use protein structure predictions to advance biology,” she says. A DeepMind spokesperson wrote in an email, “It’s great to see examples such as this where the protein folding community is building on AlphaFold to work towards our shared goal of increasing our understanding of structural biology.”
But AlphaFold2 solved the structures of only single proteins, whereas RoseTTAFold has also predicted complexes, such as the structure of the immune molecule interleukin-12 latched onto its receptor. Many biological functions depend on protein-protein interactions, says Torsten Schwede, a computational structural biologist at the University of Basel. “The ability to handle protein-protein complexes directly from sequence information makes it extremely attractive for many questions in biomedical research.”
Baker concedes that AlphaFold2’s structures are more accurate. But Savvides says the Baker lab’s approach better captures “the essence and particularities of protein structure,” such as identifying strings of atoms sticking out of the sides of the protein—features key to interactions between proteins. Last year, AlphaFold2 needed a lot of computing power to work, more than RoseTTAFold. “Now, it seems they’ve accelerated their method since CASP14, and it’s now comparable to RoseTTAFold,” Baek says.
On 1 June, Baker and Baek began to challenge their method by asking researchers to send in their most baffling protein sequences. Fifty-six head scratchers arrived in the first month, all of which now have predicted structures. Agard’s group sent in an amino acid sequence with no known similar proteins. Within hours, his group got a protein model back “that probably saved us a year of work,” Agard says. Now, he and his team know where to mutate the protein to test ideas about how it functions.
Because Baek’s and Baker’s group has released its computer code on the web, others can improve on it; the code has been downloaded 250 times since 1 July. “Many researchers will build their own structure prediction methods upon Baker’s work,” says Jinbo Xu, a computational structural biologist at the Toyota Technological Institute at Chicago. Hassabis says its computer code is now also open source. As a result of both groups’ work, progress should now be swift, Moult says: “When there’s a breakthrough like this, 2 years later, everyone is doing it as well if not better than before.”
#ArtOfPrinting; #TypeDesigning; #MakingBooks; #RussellMaret
New York/Canadian-Media: Russell Maret, a book artist, type designer and private-press printer working in New York City, describes in this post -- which first appeared in the Library of Congress (LoC) Magazine -- his passion for the magic of making books to transform the world, LoC reported.
Russell Maret. Photo: Annie Schlechter.
Two of the earliest-known objects of European printing, Maret said, are a piece of moveable metal type and the Gutenberg Bible, widely considered one of the most beautiful books ever printed. (The Library’s copy is one of three perfect vellum copies known to exist.)
These two objects, the moveable metal type and the Gutenberg Bible, Maret continued, constitute what we now call the book arts.
The book arts form an amorphous field populated by printers, papermakers, type designers, engravers, bookbinders and other craftspeople.
Each branch of the book arts, like any creative field, tries to make something out of the base materials of paper, lead and ink that is greater than the sum of its parts.
Printing is a permanent transformation, both technical and existential, that literally turns a blank piece of paper into a messenger of ideas.
Permanence in printing is relative, in as much as the permanence of an idea is subject to the shifting interpretations of time (The earth is the center of the universe!). And books, as we all know, can be burned.
In 1989, when Maret was 18 years old, he inked up a printing press and pulled a proof for the first time; from that instant, he said, he was determined to print, and over 30 years later he is still determined to do it better.
In 1996 he designed a typeface, and since then type design and alphabetical form have become the primary focus of his work, mapping new pathways for him to pursue in his books.
Making a book, said Maret, involves hard physical work, a high level of attentiveness and, ideally, a willingness to reevaluate and change, together with the excitement of permanence and transformation and an awareness that one’s efforts might fall short of both.
#IrvineStudy; #MediaViolence; #MediaExposure; #ScienceAdvances
New York, Apr 23 (Canadian-Media): Repeated exposure to media coverage of collective traumas, such as mass shootings or natural disasters, can fuel a cycle of distress, according to a University of California, Irvine study.
Researchers found that individuals can become more emotionally responsive to news reports of subsequent incidents, resulting in heightened anxiety and worry about future occurrences.
The report appears in Science Advances, a peer-reviewed, multidisciplinary, open-access journal published by the American Association for the Advancement of Science.
“It’s natural for people to experience feelings of concern and uncertainty when a terrorist attack or a devastating hurricane occurs,” said senior author Roxane Cohen Silver, UC Irvine professor of psychological science. “Media coverage of these events, fueled by the 24-hour news cycle and proliferation of mobile technologies, is often repetitious and can contain graphic images, video and sensationalized stories, extending the impact to populations beyond those directly involved.”
Earlier research has shown that consumption of media coverage of a collective trauma is a rational response for individuals seeking information as a way to mitigate their apprehension and cope with their stress. However, this strategy may backfire. According to this new study, repeated exposure to explicit content may exacerbate fear about future adversities, which promotes future media consumption and greater anxiety when they do occur. There is an even greater risk of falling into this pattern for those who have experienced violence in their lives or have been diagnosed with mental health ailments.
“The cycle of media exposure and distress appears to have downstream implications for public health as well,” said Rebecca R. Thompson, a UC Irvine postdoctoral scholar in psychological science and lead author of the report. “Repeated exposure to news coverage of collective traumas has been linked to poor mental health consequences — such as flashbacks — in the immediate aftermath and posttraumatic stress responses and physical health problems over time, even among individuals who did not directly experience the event.”
A national longitudinal study of more than 4,000 U.S. residents was conducted by Thompson, Silver and their colleagues over a three-year period following the 2013 Boston Marathon bombings and the 2016 massacre at the Pulse nightclub in Orlando, Florida. Participants were surveyed four times, enabling the team to capture responses to both tragedies and examine how responses to the first incident affected reactions to news coverage of the second.
“Our findings suggest that media organizations should seek to balance the sensationalistic aspects of their coverage, such as providing more informational accounts as opposed to lengthy descriptions of carnage, as they work to inform the public about breaking news events,” Silver said. “This may lessen the impact of exposure to one event, reducing the likelihood of increased worry and media-seeking behavior for subsequent events.”
Also conducting the study were Nickolas M. Jones, former UC Irvine psychological science doctoral student, and E. Alison Holman, UC Irvine associate professor of nursing. Project funding was provided by National Science Foundation grants BCS-1342637, BCS-1451812 and BCS-1650792.
#MIT; #AI; #MITTechnologyReview; #KellgrenLawrenceGrade; #NIH
New York/Canadian-Media: A new study shows how training deep-learning models on patient outcomes could help reveal gaps in existing medical knowledge, Karen Hao, the senior AI reporter at MIT Technology Review reported.
Image: Measuring pain scale. Image credit: MIT Technology
In the last few years, research has shown that deep learning can match expert-level performance in medical imaging tasks like early cancer detection and eye disease diagnosis. But there’s also cause for caution. Other research has shown that deep learning has a tendency to perpetuate discrimination. With a health-care system already riddled with disparities, sloppy applications of deep learning could make that worse.
Now a new paper published in Nature Medicine is proposing a way to develop medical algorithms that might help reverse, rather than exacerbate, existing inequality. The key, says Ziad Obermeyer, an associate professor at UC Berkeley who oversaw the research, is to stop training algorithms to match human expert performance.
The paper looks at a specific clinical example of the disparities that exist in the treatment of knee osteoarthritis, an ailment which causes chronic pain. Assessing the severity of that pain helps doctors prescribe the right treatment, including physical therapy, medication, or surgery. This is traditionally done by a radiologist reviewing an x-ray of the knee and scoring the patient’s pain on the Kellgren–Lawrence grade (KLG), which calculates pain levels based on the presence of different radiographic features, like the degree of missing cartilage or structural damage.
But data collected by the National Institutes of Health found that doctors using this method systematically score Black patients’ pain as far less severe than what they say they’re experiencing. Patients self-report their pain levels using a survey that asks how much it hurts to do various things, such as fully straightening their knee. But these self-reported pain levels are ignored in favor of the radiologist’s KLG score when prescribing treatment. In other words, Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.
This has consistently miffed medical experts. One hypothesis is that Black patients could be reporting higher levels of pain in order to get doctors to treat them more seriously. But there’s an alternative explanation. The KLG methodology itself could be biased. It was developed several decades ago with white British populations. Some medical experts argue that the list of radiographic markers it tells clinicians to look for may not include all the possible physical sources of pain within a more diverse population. Put another way, there may be radiographic indicators of pain that appear more commonly in Black people that simply aren’t part of the KLG rubric.
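Obermeyer's prescription, training against what patients report rather than what radiologists score, can be sketched with a few lines of placeholder code. The features, columns and model below are assumptions made for illustration; the study itself trains a deep network on the raw x-ray images.

```python
# Hypothetical sketch: fit a model to patients' self-reported pain instead of the
# radiologist's KLG grade. All data here is random placeholder data, and the
# model is a stand-in for the deep network used in the actual study.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 500
xray_features = rng.normal(size=(n_patients, 16))           # stand-in for image-derived features
self_reported_pain = rng.uniform(0, 100, size=n_patients)   # survey-based pain score (the target)
klg_grade = rng.integers(0, 5, size=n_patients)              # conventional expert label (not used as target)

X_train, X_test, y_train, y_test = train_test_split(
    xray_features, self_reported_pain, random_state=0)

# The key shift: the training signal is the patient-reported outcome.
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 against self-reported pain:", model.score(X_test, y_test))
```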
#ScienceAndTechnology; #AI; #PVersusNPQuestion; #ComputerScience
New York/Canadian-Media: Computers are good at answering questions. What's the shortest route from my house to Area 51? Is 8,675,309 a prime number? How many teaspoons in a tablespoon? For questions like these, they've got you covered.
This collection of dots and lines is the shortest traveling salesperson problem tour that passes through 1,000 points. Image Credit: William Cook et al., CC BY-ND
There are certain innocent-sounding questions, though, that computer scientists believe computers will never be able to answer—at least not within our lifetimes. These problems are the subject of the P versus NP question, which asks whether problems whose solutions can be checked quickly can also be solved quickly. P versus NP is such a fundamental question that either designing a fast algorithm for one of these hard problems or proving you can't would net you a cool million dollars in prize money.
My favorite hard problem is the traveling salesperson problem. Given a collection of cities, it asks: What is the most efficient route that visits all of them and returns to the starting city? To come up with practical answers in the real world, computer scientists use approximation algorithms, methods that don't solve these problems exactly but get close enough to be helpful. Until now, the best of these algorithms, developed in 1976, guaranteed that its answers would be no worse than 50% off from the best answer.
I work on approximation algorithms as a computer scientist. My collaborators Anna Karlin and Shayan Oveis Gharan and I have found a way to beat that 50% mark, though just barely. We were able to prove that a specific approximation algorithm puts a crack in this long-standing barrier, a finding that opens the way for more substantial improvements.
This is important for more than just planning routes. Any of these hard problems can be encoded in the traveling salesperson problem, and vice versa: Solve one and you've solved them all. You might say that these hard problems are all the same computational gremlin wearing different hats.
The best route is hard to find
The problem is usually stated as "find the shortest route." However, the most efficient solution can be based on a variety of quantities in the real world, such as time and cost, as well as distance.
To get a sense of why this problem is difficult, imagine the following situation: Someone gives you a list of 100 cities and the cost of plane, train and bus tickets between each pair of them. Do you think you could figure out the cheapest itinerary that visits them all?
Consider the sheer number of possible routes. If you have 100 cities you want to visit, the number of possible orders in which to visit them is 100 factorial, meaning 100 × 99 × 98 × … × 1. This is larger than the number of atoms in the universe.
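As a quick sanity check on that claim (the arithmetic below is a back-of-the-envelope calculation, using 10^80 as a common rough estimate for the atom count):

```python
import math

routes = math.factorial(100)   # possible orderings of 100 cities
atoms_estimate = 10 ** 80      # rough common estimate of atoms in the observable universe

print(len(str(routes)))        # 158 digits, roughly 9.3 x 10**157
print(routes > atoms_estimate) # True
```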
Going with good enough
Unfortunately, the fact that these problems are difficult does not stop them from coming up in the real world. Besides finding routes for traveling salespeople (or, these days, delivery trucks), the traveling salesperson problem has applications in many areas, from mapping genomes to designing circuit boards.
To solve real-world instances of this problem, practitioners do what humans have always done: Get solutions that might not be optimal but are good enough. It's OK if a salesperson takes a route that's a few miles longer than it has to be. No one cares too much if a circuit board takes a fraction of a second longer to manufacture or an Uber takes a few minutes longer to carry its passengers home.
Computer scientists have embraced "good enough" and for the past 50 years or so have been working on so-called approximation algorithms. These are procedures that run quickly and produce solutions that might not be optimal but are probably close to the best possible solution.
The long-reigning champ of approximation
One of the first and most famous approximation algorithms is for the traveling salesperson problem and is known as the Christofides-Serdyukov algorithm. It was designed in the 1970s by Nicos Christofides and, independently, by a Soviet mathematician named Anatoliy Serdyukov whose work was not widely known until recently.
The Christofides-Serdyukov algorithm is quite simple, at least as algorithms go. You can think of a traveling salesperson problem as a network in which each city is a node and each path between pairs of cities is an edge. Each edge is assigned a cost, for example the traveling time between the two cities. The algorithm first selects the cheapest set of edges that connect all the cities.
This, it turns out, is easy to do: You just keep adding the cheapest edge that connects a new city. However, this is not a solution. After connecting all the cities, some might have an odd number of edges coming out of them, which doesn't make sense: Every time you enter a city with an edge, there should be a complementary edge you use to leave it. So the algorithm then adds the cheapest collection of edges that makes every city have an even number of edges and then uses this to produce a tour of the cities.
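Those three steps (cheapest connecting edges, a cheapest "evening-out" set of edges on the odd cities, then a tour) can be sketched directly with networkx graph primitives. The four-city costs below are invented for illustration, and this is a bare-bones reading of the algorithm rather than a production implementation:

```python
# A minimal sketch of the Christofides-Serdyukov steps described above, using
# networkx graph primitives. The toy distances are made up for illustration.
import networkx as nx

# Complete graph on four "cities" with symmetric travel costs.
costs = {("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
         ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30}
G = nx.Graph()
for (u, v), w in costs.items():
    G.add_edge(u, v, weight=w)

# Step 1: cheapest set of edges connecting all cities (a minimum spanning tree).
mst = nx.minimum_spanning_tree(G, weight="weight")

# Step 2: cities with an odd number of edges must be paired up, so add the
# cheapest matching among the odd-degree cities.
odd = [v for v, d in mst.degree() if d % 2 == 1]
matching = nx.min_weight_matching(G.subgraph(odd), weight="weight")

# Step 3: combine tree and matching; every city now has an even number of edges,
# so a circuit using each edge once exists. Skip repeated cities to get the tour.
multigraph = nx.MultiGraph(mst)
multigraph.add_edges_from(matching)
tour, seen = [], set()
for u, _ in nx.eulerian_circuit(multigraph, source="A"):
    if u not in seen:
        tour.append(u)
        seen.add(u)
tour.append("A")
print(tour)
```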
This algorithm runs quickly and always produces a solution that's at most 50% longer than the optimal one. So, if it produces a tour of 150 miles, it means that the best tour is no shorter than 100 miles.
Of course, there's no way to know exactly how close to optimal an approximation algorithm gets for a particular example without actually knowing the optimal solution—and once you know the optimal solution there's no need for the approximation algorithm! But it's possible to prove something about the worst-case scenario. For example, the Christofides-Serdyukov algorithm guarantees that it produces a tour that is at most 1.5 times the length of the shortest collection of edges connecting all the cities—and, therefore, at most 1.5 times the length of the optimal tour.
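The arithmetic behind that guarantee is worth spelling out (this is the standard textbook argument, summarized here rather than quoted from the article): the connecting edges cost no more than the optimal tour, and the evening-out matching costs no more than half of it.

```latex
% T: the cheapest connecting edges (a spanning tree); M: the matching added on
% the odd-degree cities; OPT: the length of the optimal tour.
\begin{align*}
\operatorname{cost}(T) &\le \mathrm{OPT}
  && \text{(delete one edge of the optimal tour; what remains still connects all cities)}\\
\operatorname{cost}(M) &\le \tfrac{1}{2}\,\mathrm{OPT}
  && \text{(the optimal tour, shortcut to the odd cities, splits into two matchings)}\\
\operatorname{cost}(\text{tour}) &\le \operatorname{cost}(T) + \operatorname{cost}(M) \le \tfrac{3}{2}\,\mathrm{OPT}.
\end{align*}
```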
A really small improvement that's a big deal
Since the discovery of this algorithm in 1976, computer scientists had been unable to improve upon it at all. However, last summer my collaborators and I proved that a particular algorithm will, on average, produce a tour that is less than 49.99999% away from the optimal solution. I'm too ashamed to write out the true number of 9s (there are a lot), but this nevertheless breaks the longstanding barrier of 50%.
The algorithm we analyzed is very similar to Christofides-Serdyukov. The only difference is that in the first step it picks a random collection of edges that connects all the cities and, on average, looks like a traveling salesperson problem tour. We use this randomness to show that we don't always get stuck where the previous algorithm did.
While our progress is small, we hope that other researchers will be inspired to take another look at this problem and make further progress. Often in our field, thresholds like 50% stand for a long time, and after the first blow they fall more quickly. One of our big hopes is that the understanding we gained about the traveling salesperson problem while proving this result will help spur progress.
Getting closer to perfect
There is another reason to be optimistic that we will see more progress within the next few years: We think the algorithm we analyzed, which was devised in 2010, may be much better than we were able to prove. Unlike Christofides' algorithm, which can be shown to have a hard limit of 50%, we suspect this algorithm may be as good as 33%.
Indeed, experimental results that compare the approximation algorithm to known optimal solutions suggest that the algorithm is quite good in practice, often returning a tour within a few percent of optimal.
#Google; #Maps; #Photos; #Globe
New York/Canadian-Media: The COVID-19 pandemic has brought about many changes, including business closures and updated hours for restaurants and stores.
CC0 Public Domain. Image credit: Unsplash
Google has ensured that necessary changes to over 200 million places on its Maps application can be made by any user with a Google account. A newer development within Maps is Local Guides, a community of 150 million users around the globe who contribute updates to Maps.
Google has also made the Maps experience easier by giving users a way to learn more about places: exploring an assortment of photos, reviews and updates about locations from all over the world.
For example, Android phone users can now use the Contribute tab in Maps to join the "Local Love Challenge" and write ratings and reviews as well as place location confirmations. The current goal stands at 100,000 recorded businesses. The Maps team plans to use the Local Love Challenge toward updating data on locations for countries covered in the future.
Another feature the Maps team will add in the coming weeks is photo updates, which let users share recent photos of visited places and give others insight into not only a business's appearance and location but also the street view and various traffic conditions nearby.
For participation in photo updates, users can navigate to the Updates tab when viewing a place and select "upload a photo update". All users may upload as many photos as they wish as well as view any photos left by others in the Updates section.
In addition, routes within the Maps app can be revised by selecting the "Edit the map" feature and reporting a "Missing Road". Users can also add missing roads by drawing lines, change road directions, rename roads, delete or realign incorrectly drawn roads, and inform the Maps team about road closures along with details such as dates and reasons.
In order to ensure the accuracy of user-provided data prior to publication, the team at Google will assess all updates made by the user.
While the updates feature is already available in over 80 countries, the new location photo features will become available over the coming months.
#Chemists; #Alchemists; #AfricanAmericanChemists
Washington/Canadian-Media: A vital part of society for hundreds of years, chemists, and the alchemists who came before them, have channeled people's curiosity about the elements and their fascinating properties into the understanding and betterment of our world.
Chemistry laboratory at Tuskegee Institute, ca. 1902. Library of Congress Prints & Photographs Division. //www.loc.gov/item/2014646471/
This article by the Library of Congress (LoC) highlights African American chemists Alice Ball, Norbert Rillieux, Marie Maynard Daly, and Percy Julian.
Growing up in Seattle, Alice Ball (1892-1916) earned two bachelor’s degrees from the University of Washington, one for pharmaceutical chemistry and one for pharmacy. After relocating to Hawaii, she became the first African American and the first woman to earn a master’s degree in chemistry at the College of Hawaii (known today as the University of Hawaii) and became the first female chemistry instructor at the University at the age of 23. She was also responsible for creating an injectable treatment for leprosy patients by isolating the ethyl esters from the oil of Hydnocarpus wightianus, or chaulmoogra tree, seeds. Her work led to a treatment that was used until the 1940s and saved thousands of lives.
Social Hall for the Kalaupapa leper colony. Library of Congress Prints & Photographs Division. //www.loc.gov/pictures/item/hi0098.color.361561c/
Born in New Orleans and considered to be one of the earliest chemical engineers, Norbert Rillieux (1806-1894), became an instructor of applied mechanics at L’École Centrale des Arts et Manufactures, now part of Université Paris-Saclay, France.
Rillieux began researching a more efficient sugar refining process and moved back to Louisiana at the prospect of becoming head engineer at a new sugar refinery. He completed his research and was granted Patent No. US4879 in 1846, which explained his “new and useful Improvements in the Method of Heating, Evaporating, and Cooling Liquids, especially intended for the manufacture of sugar.” This innovation allowed for more efficient production and the use of less fuel. Fun fact: Rillieux was a cousin of Edgar Degas, the French impressionist painter.
Marie Maynard Daly (1921-2003), born and raised in the borough of Queens, New York, earned a bachelor’s degree in chemistry from Queens College and her master’s in chemistry from New York University. Based on her dissertation at Columbia University, “A Study of the Products Formed by the Action of Pancreatic Amylase on Corn Starch,” she became the first African American woman in the United States to earn a Ph.D. in chemistry.
Daly taught chemistry at Howard University, performed research on the metabolism of nucleic components at the Rockefeller Institute, and taught biochemistry at the College of Physicians and Surgeons of Columbia University, ultimately becoming a professor at the Albert Einstein College of Medicine. She was a prolific author on wide-ranging subjects and was published in highly regarded journals like the Journal of General Physiology, the Journal of Experimental Medicine, and the Journal of Clinical Investigation. These accomplishments are incredible; more impressive still, she instituted a scholarship program in her parents’ name at Queens College for minority students eager to study science.
Percy Julian (1899-1975) is well known for his landmark synthesis of physostigmine, a compound that to this day is used in the treatment of glaucoma; his findings, co-authored with Josef Pikl, appeared in the Journal of the American Chemical Society, v. 57, no. 4.
Julian made enormous contributions to the field of medicinal chemistry; millions of people have benefited from his research, which brought him over 100 patents, including one for margarine!
Doing incredible work against just as incredible odds, these four Americans contributed not only to this country but the world. The following list of Internet and print resources is a good place to start learning more about them and their discoveries, as well as other amazing African American chemists.
#PlanetInnerWorkings; #Interpretation; #AthanasiusKircher; #Maps; #LibraryOfCongress
Washington/Canadian-Media: Athanasius Kircher, a scholar, scientist, and Jesuit priest, developed theories about the world beneath his feet and created a series of three maps in 1668 as part of his book, 'Mundus subterraneus' (Subterranean World), that show an impressive interpretation of the planet’s inner workings, Library of Congress (LoC) reports said.
These maps are housed in the LOC's Rare Book Division.
Scientists and storytellers have often wondered about the happenings under the surface of the Earth, and have come up with imaginative subterranean worlds.
Kircher thought that the subterranean world could explain the volcanic activity and the movements of the tides.
The first map, Systema Ideale Pyrophylaciorum, explains a complex system by which fire travels from the Earth’s core to its surface, breaking through via the eruptions of volcanoes (or montes Vulcanii, mountains of Vulcan, the Roman god of metalworking and fire).
Athanasius Kircher. Systema ideale pyrophylaciorum. 1668. Rare Book Division of LoC.
Shown on the map is a large central fire (ignis centralis) labeled A, with canals labeled C, and smaller lakes (aestuaria) of fire, labeled B.
These lakes and canals are similar to those found on the Earth’s surface, with the difference that they are made of fire.
Clearly discernible on the map are paths leading from the central flame to volcanic eruptions around the world; the smoke emerging from the volcanoes blends into the swirling clouds surrounding the globe, evoking the smoke and ash that accompany volcanic eruptions.
Kircher admitted that there were gaps in his theory, though by the standards of current science the notion of a fiery core at the center of the Earth is not entirely incorrect: the U.S. Energy Information Administration reports that the temperature of the inner core is as hot as the surface of the sun.
However, according to Kircher, besides fire, or pyrophylacia (“fire-houses”), traveling through the underground, there were also hydrophylacia, or “water-houses,” which interacted with the ignis centralis and also moved via canals and lakes.
Kircher believed the Earth’s interior to be one of movement, and he attributed the formation of tides to this interaction of water and fire under the surface of the earth. The interaction could be destructive, too, causing whirlpools.
Water was pushed up through the surface at the base of mountains, the mouths of which can be seen on the map.
Athanasius Kircher. Detail from System ideale qvo exprimitur, aquarum, showing two whirlpools on the ocean’s surface and the “mouths” at the base of two mountains. 1668. Rare Book Division of LoC.
In addition to presenting a lively view of the subterranean, Kircher also made a map showing the effects of these underground systems of water and fire on the surface, with volcanoes (montes vulcanios) and whirlpools (abyssos) labeled, seen below.
Included in the map are common features of other early maps of territories that had newly been discovered by European colonial powers: California appears as a peninsula, and Australia is connected to Antarctica.
Australia/Antarctica is labeled on the map as “incognita,” or unknown, and this region is devoid of volcanoes, which appear on every other continent.
Also found in narrow passages and around the capes of continents are whirlpools that underscore the many dangers of exploration.
Athanasius Kircher. Tabula geographico-hydrographica motus oceani, currents, abyssos, montes ignivomos. 1668. Rare Book Division of LoC.
Besides his geological theories, Kircher is also known for his other accomplishments including mapping the mythic island of Atlantis and pioneering studies in Egyptology.
Although his view of the underground is no longer accepted today, these maps offer striking examples of how maps can be used for scientific purposes, both under the Earth and beyond it.
#Washington; #LibraryOfCongress; #Georeferencing; #rasterData; #SpatialReferenceInfo
Washington/Canadian-Media: The process of georeferencing, or adding digital spatial reference information to an otherwise non-spatial image, is explained by Meagan Snow, Geospatial Data Visualization Librarian in the Geography and Map Division of the Library of Congress (LoC), LoC reports said.
Library of Congress. Image credit: Twitter handle
Adding spatial reference information to a scanned map image allows the image to align correctly with the geographic features it was made to represent.
This enables a user to layer any other spatial data file alongside (or on top of) their map image.
Snow makes use of the following 1967 map of the US Capitol grounds as an example.
Map showing properties under the jurisdiction of the Architect of the Capitol, 1967. Geography & Map Division, Library of Congress. Image credit: LoC
This map shows properties under the jurisdiction of the Architect of the Capitol in 1967. The Madison Building of the Library of Congress, home to the Geography & Map Division, is missing from this map. Comparing this 1967 map to today’s Capitol Hill Complex reveals how the area has changed over time.
Maps that are scanned as image files, explains Snow, meet the criteria for what is called raster data: data composed of a continuous grid of cells (or pixels).
Because spatial data is commonly stored in raster formats, scanned map images can be loaded directly into GIS software without any file conversion.
It is the presence of spatial reference information that enables geographic data layers to align correctly when viewed in GIS software. Georeferencing lets a user manually add control points between the non-spatial scanned map image and a pre-existing GIS data layer that already has spatial reference information and displays correctly in the software.
Georeferencing tools, available in all of the most widely used GIS software packages, allow a user to place a control point by selecting a specific point on the scanned map image and then selecting the exact same point on the GIS layer. Once the user adds a couple of control points, the scanned map image will begin to align with the existing data layer’s scale.
Beginning of the georeferencing process: two control points have been placed between the scanned map image and the current aerial imagery, bringing the scanned map image to the correct scale but not the correct placement. Image credit: LoC
But for the rest of the map to be aligned, a user must continue to add control points, making sure they are well-distributed across the map image, until it is determined that the two layers are aligned properly. Here’s what the map looks like after 21 control points have been placed.
Completion of the georeferencing process: a number of well-distributed control points have been placed, bringing the scanned map image to both the correct scale and correct geographic placement. Image credit: LoC
The georeferencing can be saved once the process is complete, so that the scanned map image lands in the right place in the world whenever it is loaded into GIS software.
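For readers who prefer code to a desktop GIS, the same place-control-points-then-save workflow can be approximated with the open-source rasterio library. This sketch is an assumption rather than the tooling Snow describes, and the file names, pixel positions and coordinates are made-up placeholders:

```python
# Rough sketch of georeferencing with ground control points using rasterio.
# This is not the LoC workflow; file names, pixel positions and coordinates
# below are placeholders invented for illustration.
import rasterio
from rasterio.control import GroundControlPoint
from rasterio.crs import CRS
from rasterio.transform import from_gcps

# Each control point ties a pixel (row, col) on the scanned map to a real-world
# longitude/latitude taken from a layer that is already georeferenced.
gcps = [
    GroundControlPoint(row=400, col=500, x=-77.0090, y=38.8899),
    GroundControlPoint(row=450, col=2100, x=-77.0010, y=38.8901),
    GroundControlPoint(row=1900, col=1800, x=-77.0025, y=38.8865),
    GroundControlPoint(row=2000, col=600, x=-77.0085, y=38.8862),
]

with rasterio.open("capitol_grounds_1967.tif") as src:
    profile = src.profile
    data = src.read()

# Fit an affine transform to the control points and save a georeferenced copy,
# so the image lands in the right place whenever it is reopened in GIS software.
profile.update(transform=from_gcps(gcps), crs=CRS.from_epsg(4326))
with rasterio.open("capitol_georeferenced.tif", "w", **profile) as dst:
    dst.write(data)
```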
In the lower right-hand corner of the map we can now see the current aerial footprint of the Madison Building where it was missing.
The 1967 map of the Capitol grounds is layered against current aerial imagery, showing 50 years of changes to the Capitol Complex, including the construction of the Library of Congress’ Madison Building. Image credit: LoC
Georeferencing a scanned image has many uses, the primary one being that it allows a map user to view the map in geographic context with any number of other spatial data sources, and it further enables the user to compare maps created at different scales or in different time periods.
Users can also see an older map juxtaposed against current aerial imagery or spatial data, and can use the scanned map image as a basis for creating new spatial datasets.