#ArtOfPrinting; #TypeDesigning; #MakingBooks; #RussellMaret
New York/Canadian-Media: Russell Maret, a book artist, type designer and private-press printer working in New York City, describes in this post -- which first appeared in the Library of Congress (LoC) Magazine -- his passion for the magic of making books to transform the world, LoC reported.
Russell Maret. Photo: Annie Schlechter.
Two of the earliest-known examples of European printing, said Maret, are moveable metal type and the Gutenberg Bible, widely considered one of the most beautiful books ever printed. (The Library’s copy is one of three perfect vellum copies known to exist.)
These two objects, continued Maret, represent the origins of what we now call the book arts.
The book arts form an amorphous field populated by printers, papermakers, type designers, engravers, bookbinders and other craftspeople.
Each branch of the book arts, like any creative field, tries to make something out of the base materials of paper, lead and ink that is greater than the sum of its parts.
Printing is a permanent transformation, both technical and existential: it literally transforms a blank piece of paper into a messenger of ideas.
Permanence in printing is relative, in as much as the permanence of an idea is subject to the shifting interpretations of time (The earth is the center of the universe!). And books, as we all know, can be burned.
In 1989, when Maret was 18 years old, he said, he inked up a printing press and pulled a proof for the first time. From that instant he was determined to print, and more than 30 years later he is still determined to do it better.
In 1996 he designed his first typeface, and since then type design and alphabetical form have become the primary focus of his work, mapping new pathways for him to pursue in his books.
Making a book, said Maret, involves hard physical work, a high level of attentiveness and, ideally, a willingness to reevaluate and change, along with the excitement of permanence and transformation and the awareness that one’s efforts might fall short of both.
#IrvineStudy; #MediaViolence; #MediaExposure; #ScienceAdvances
New York, Apr 23 (Canadian-Media): Repeated exposure to media coverage of collective traumas, such as mass shootings or natural disasters, can fuel a cycle of distress, according to a University of California, Irvine study.
Researchers found that individuals can become more emotionally responsive to news reports of subsequent incidents, resulting in heightened anxiety and worry about future occurrences.
The report appears in Science Advances, a peer-reviewed, multidisciplinary, open-access journal published by the American Association for the Advancement of Science.
“It’s natural for people to experience feelings of concern and uncertainty when a terrorist attack or a devastating hurricane occurs,” said senior author Roxane Cohen Silver, UC Irvine professor of psychological science. “Media coverage of these events, fueled by the 24-hour news cycle and proliferation of mobile technologies, is often repetitious and can contain graphic images, video and sensationalized stories, extending the impact to populations beyond those directly involved.”
Earlier research has shown that consumption of media coverage of a collective trauma is a rational response for individuals seeking information as a way to mitigate their apprehension and cope with their stress. However, this strategy may backfire. According to this new study, repeated exposure to explicit content may exacerbate fear about future adversities, which promotes future media consumption and greater anxiety when they do occur. There is an even greater risk of falling into this pattern for those who have experienced violence in their lives or have been diagnosed with mental health ailments.
“The cycle of media exposure and distress appears to have downstream implications for public health as well,” said Rebecca R. Thompson, a UC Irvine postdoctoral scholar in psychological science and lead author of the report. “Repeated exposure to news coverage of collective traumas has been linked to poor mental health consequences — such as flashbacks — in the immediate aftermath and posttraumatic stress responses and physical health problems over time, even among individuals who did not directly experience the event.”
A national longitudinal study of more than 4,000 U.S. residents was conducted by Thompson, Silver and their colleagues over a three-year period following the 2013 Boston Marathon bombings and the 2016 massacre at the Pulse nightclub in Orlando, Florida. Participants were surveyed four times, enabling the team to capture responses to both tragedies and examine how responses to the first incident affected reactions to news coverage of the second.
“Our findings suggest that media organizations should seek to balance the sensationalistic aspects of their coverage, such as providing more informational accounts as opposed to lengthy descriptions of carnage, as they work to inform the public about breaking news events,” Silver said. “This may lessen the impact of exposure to one event, reducing the likelihood of increased worry and media-seeking behavior for subsequent events.”
Also conducting the study were Nickolas M. Jones, former UC Irvine psychological science doctoral student, and E. Alison Holman, UC Irvine associate professor of nursing. Project funding was provided by National Science Foundation grants BCS-1342637, BCS-1451812 and BCS-1650792.
#MIT; #AI; #MITTechnologyReview; #KellgrenLawrenceGrade; #NIH
New York/Canadian-Media: A new study shows how training deep-learning models on patient outcomes could help reveal gaps in existing medical knowledge, Karen Hao, the senior AI reporter at MIT Technology Review reported.
Image: Measuring pain scale. Image credit: MIT Technology Review
In the last few years, research has shown that deep learning can match expert-level performance in medical imaging tasks like early cancer detection and eye disease diagnosis. But there’s also cause for caution. Other research has shown that deep learning has a tendency to perpetuate discrimination. With a health-care system already riddled with disparities, sloppy applications of deep learning could make that worse.
Now a new paper published in Nature Medicine is proposing a way to develop medical algorithms that might help reverse, rather than exacerbate, existing inequality. The key, says Ziad Obermeyer, an associate professor at UC Berkeley who oversaw the research, is to stop training algorithms to match human expert performance.
The paper looks at a specific clinical example of the disparities that exist in the treatment of knee osteoarthritis, an ailment which causes chronic pain. Assessing the severity of that pain helps doctors prescribe the right treatment, including physical therapy, medication, or surgery. This is traditionally done by a radiologist reviewing an x-ray of the knee and scoring the patient’s pain on the Kellgren–Lawrence grade (KLG), which calculates pain levels based on the presence of different radiographic features, like the degree of missing cartilage or structural damage.
But data collected by the National Institutes of Health found that doctors using this method systematically score Black patients’ pain as far less severe than what they say they’re experiencing. Patients self-report their pain levels using a survey that asks how much it hurts to do various things, such as fully straightening their knee. But these self-reported pain levels are ignored in favor of the radiologist’s KLG score when prescribing treatment. In other words, Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.
This has consistently puzzled medical experts. One hypothesis is that Black patients could be reporting higher levels of pain in order to get doctors to treat them more seriously. But there’s an alternative explanation: the KLG methodology itself could be biased. It was developed several decades ago with white British populations. Some medical experts argue that the list of radiographic markers it tells clinicians to look for may not include all the possible physical sources of pain within a more diverse population. Put another way, there may be radiographic indicators of pain that appear more commonly in Black people that simply aren’t part of the KLG rubric.
#ScienceAndTechnology; #AI; #PVersusNPQuestion; #ComputerScience
New York/Canadian-Media: Computers are good at answering questions. What's the shortest route from my house to Area 51? Is 8,675,309 a prime number? How many teaspoons in a tablespoon? For questions like these, they've got you covered.
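Questions like the primality one really are routine for a computer. As a minimal illustrative sketch (not part of the original article), a few lines of Python settle it by trial division, one of the simplest primality tests:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: check every candidate divisor up to the square root of n."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(8_675_309))  # True: 8,675,309 is prime
```

Checking divisors only up to the square root is enough, because any factor larger than the root pairs with one smaller than it.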
This collection of dots and lines is the shortest traveling salesperson problem tour that passes through 1,000 points. Image Credit: William Cook et al., CC BY-ND
There are certain innocent-sounding questions, though, that computer scientists believe computers will never be able to answer—at least not within our lifetimes. These problems are the subject of the P versus NP question, which asks whether problems whose solutions can be checked quickly can also be solved quickly. P versus NP is such a fundamental question that either designing a fast algorithm for one of these hard problems or proving you can't would net you a cool million dollars in prize money.
My favorite hard problem is the traveling salesperson problem. Given a collection of cities, it asks: What is the most efficient route that visits all of them and returns to the starting city? To come up with practical answers in the real world, computer scientists use approximation algorithms, methods that don't solve these problems exactly but get close enough to be helpful. Until now, the best of these algorithms, developed in 1976, guaranteed that its answers would be no worse than 50% off from the best answer.
I work on approximation algorithms as a computer scientist. My collaborators Anna Karlin and Shayan Oveis Gharan and I have found a way to beat that 50% mark, though just barely. We were able to prove that a specific approximation algorithm puts a crack in this long-standing barrier, a finding that opens the way for more substantial improvements.
This is important for more than just planning routes. Any of these hard problems can be encoded in the traveling salesperson problem, and vice versa: Solve one and you've solved them all. You might say that these hard problems are all the same computational gremlin wearing different hats.
The best route is hard to find
The problem is usually stated as "find the shortest route." However, the most efficient solution can be based on a variety of quantities in the real world, such as time and cost, as well as distance.
To get a sense of why this problem is difficult, imagine the following situation: Someone gives you a list of 100 cities and the cost of plane, train and bus tickets between each pair of them. Do you think you could figure out the cheapest itinerary that visits them all?
Consider the sheer number of possible routes. If you have 100 cities you want to visit, the number of possible orders in which to visit them is 100 factorial, meaning 100 × 99 × 98 × … × 1. This is larger than the number of atoms in the universe.
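That comparison is easy to verify directly; here is a quick sketch, using a rough standard estimate of about 10^80 atoms in the observable universe:

```python
import math

routes = math.factorial(100)  # number of possible orders to visit 100 cities
atoms = 10 ** 80              # rough estimate of atoms in the observable universe

print(len(str(routes)))  # 158: 100! has 158 digits
print(routes > atoms)    # True: vastly more routes than atoms
```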
Going with good enough
Unfortunately, the fact that these problems are difficult does not stop them from coming up in the real world. Besides finding routes for traveling salespeople (or, these days, delivery trucks), the traveling salesperson problem has applications in many areas, from mapping genomes to designing circuit boards.
To solve real-world instances of this problem, practitioners do what humans have always done: Get solutions that might not be optimal but are good enough. It's OK if a salesperson takes a route that's a few miles longer than it has to be. No one cares too much if a circuit board takes a fraction of a second longer to manufacture or an Uber takes a few minutes longer to carry its passengers home.
Computer scientists have embraced "good enough" and for the past 50 years or so have been working on so-called approximation algorithms. These are procedures that run quickly and produce solutions that might not be optimal but are probably close to the best possible solution.
The long-reigning champ of approximation
One of the first and most famous approximation algorithms is for the traveling salesperson problem and is known as the Christofides-Serdyukov algorithm. It was designed in the 1970s by Nicos Christofides and, independently, by a Soviet mathematician named Anatoliy Serdyukov whose work was not widely known until recently.
The Christofides-Serdyukov algorithm is quite simple, at least as algorithms go. You can think of a traveling salesperson problem as a network in which each city is a node and each path between pairs of cities is an edge. Each edge is assigned a cost, for example the traveling time between the two cities. The algorithm first selects the cheapest set of edges that connect all the cities.
This, it turns out, is easy to do: You just keep adding the cheapest edge that connects a new city. However, this is not a solution. After connecting all the cities, some might have an odd number of edges coming out of them, which doesn't make sense: Every time you enter a city with an edge, there should be a complementary edge you use to leave it. So the algorithm then adds the cheapest collection of edges that makes every city have an even number of edges and then uses this to produce a tour of the cities.
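The steps above can be sketched in Python on a small symmetric cost matrix. One caveat: for simplicity this sketch pairs up the odd-degree cities greedily, where the real Christofides-Serdyukov algorithm computes a true minimum-weight perfect matching, so the 1.5 guarantee discussed below does not strictly hold for this simplified version.

```python
from collections import defaultdict

def approx_tour(cost):
    """Christofides-style sketch. cost[i][j] = symmetric travel cost between cities i, j."""
    n = len(cost)

    # 1) Cheapest set of edges connecting all cities (Prim's minimum spanning tree).
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: cost[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))

    # 2) Find cities with an odd number of tree edges.
    deg = defaultdict(int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]

    # 3) Pair up odd-degree cities (greedy stand-in for minimum-weight matching).
    while odd:
        i = odd.pop()
        j = min(odd, key=lambda v: cost[i][v])
        odd.remove(j)
        edges.append((i, j))

    # 4) Every city now has even degree, so an Eulerian circuit exists (Hierholzer).
    adj = defaultdict(list)
    for k, (i, j) in enumerate(edges):
        adj[i].append((j, k))
        adj[j].append((i, k))
    used, stack, circuit = set(), [0], []
    while stack:
        v = stack[-1]
        while adj[v] and adj[v][-1][1] in used:
            adj[v].pop()
        if adj[v]:
            w, k = adj[v].pop()
            used.add(k)
            stack.append(w)
        else:
            circuit.append(stack.pop())

    # 5) Shortcut repeated cities to turn the circuit into a tour.
    seen, tour = set(), []
    for v in circuit:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    return tour + [tour[0]]
```

On a unit square of four cities, for instance, the sketch recovers the optimal length-4 tour around the perimeter.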
This algorithm runs quickly and always produces a solution that's at most 50% longer than the optimal one. So, if it produces a tour of 150 miles, it means that the best tour is no shorter than 100 miles.
Of course, there's no way to know exactly how close to optimal an approximation algorithm gets for a particular example without actually knowing the optimal solution—and once you know the optimal solution there's no need for the approximation algorithm! But it's possible to prove something about the worst-case scenario. For example, the Christofides-Serdyukov algorithm guarantees that it produces a tour that is at most 1.5 times the length of the shortest collection of edges connecting all the cities—and, therefore, at most 1.5 times the length of the optimal tour.
A really small improvement that's a big deal
Since the discovery of this algorithm in 1976, computer scientists had been unable to improve upon it at all. However, last summer my collaborators and I proved that a particular algorithm will, on average, produce a tour that is less than 49.99999% away from the optimal solution. I'm too ashamed to write out the true number of 9s (there are a lot), but this nevertheless breaks the longstanding barrier of 50%.
The algorithm we analyzed is very similar to Christofides-Serdyukov. The only difference is that in the first step it picks a random collection of edges that connects all the cities and, on average, looks like a traveling salesperson problem tour. We use this randomness to show that we don't always get stuck where the previous algorithm did.
While our progress is small, we hope that other researchers will be inspired to take another look at this problem and make further progress. Often in our field, thresholds like 50% stand for a long time, and after the first blow they fall more quickly. One of our big hopes is that the understanding we gained about the traveling salesperson problem while proving this result will help spur progress.
Getting closer to perfect
There is another reason to be optimistic that we will see more progress within the next few years: We think the algorithm we analyzed, which was devised in 2010, may be much better than we were able to prove. Unlike Christofides' algorithm, which can be shown to have a hard limit of 50%, we suspect this algorithm may be as good as 33%.
Indeed, experimental results that compare the approximation algorithm to known optimal solutions suggest that the algorithm is quite good in practice, often returning a tour within a few percent of optimal.
#Google; #Maps; #Photos; #Globe
New York/Canadian-Media: The COVID-19 pandemic has brought about many changes, including business closures and updated hours for restaurants and stores.
CC0 Public Domain. Image credit: Unsplash
Google has ensured that necessary changes to over 200 million places on its Maps application can be made by any user with a Google account. Local Guides, a community of 150 million users around the globe who contribute updates to Maps, is a newer development within Maps.
Google has further eased the Maps experience by creating ways for users to learn more about places. This can be done by exploring an assortment of photos, reviews and updates about locations from all over the world.
For example, Android phone users can now use the Contribute tab in Maps to join the "Local Love Challenge" and write ratings and reviews as well as confirm place locations. The current goal stands at 100,000 recorded businesses. The Maps team plans to use the Local Love Challenge to update location data in additional countries in the future.
Another feature the Maps team will add in the coming weeks is photo updates, which let users share recent photos of places they have visited, giving others insight into not only a business's appearance and location but also the street view and traffic conditions nearby.
For participation in photo updates, users can navigate to the Updates tab when viewing a place and select "upload a photo update". All users may upload as many photos as they wish as well as view any photos left by others in the Updates section.
In addition, routes within the Maps app can be revised by selecting the "Edit the map" feature and reporting a "Missing Road". Users of the app can also add missing roads by drawing lines, change road directions, rename roads, delete or realign incorrect roads, and inform the Maps team about road closures along with details such as dates and reasons.
In order to ensure the accuracy of user-provided data prior to publication, the team at Google will assess all updates made by the user.
While the updates feature is already available in over 80 countries, the new location photo features will become available over the coming months.
#Chemists; #Alchemists; #AfricanAmericanChemists
Washington/Canadian-Media: Chemists, and the alchemists who came before them, have been a vital part of society for hundreds of years, channeling people's curiosity about the elements and their fascinating properties into the understanding and betterment of our world.
Chemistry laboratory at Tuskegee Institute, ca. 1902. Library of Congress Prints & Photographs Division. //www.loc.gov/item/2014646471/
This article by the Library of Congress (LoC) highlights African American chemists Alice Ball, Norbert Rillieux, Marie Maynard Daly, and Percy Julian.
Growing up in Seattle, Alice Ball (1892-1916) earned two bachelor’s degrees from the University of Washington, one for pharmaceutical chemistry and one for pharmacy. After relocating to Hawaii, she became the first African American and woman to earn a master’s degree in chemistry at the College of Hawaii (known today as the University of Hawaii) and became the first female chemistry instructor at the University at the age of 23. She was also responsible for creating an injectable cure for leprosy patients by isolating the ethyl esters from the oil of Hydnocarpus wightianus, or chaulmoogra tree, seeds. Her work led to a treatment that was used until the 1940s and saved thousands of lives.
Social Hall for the Kalaupapa leper colony. Library of Congress Prints & Photographs Division. //www.loc.gov/pictures/item/hi0098.color.361561c/
Born in New Orleans and considered to be one of the earliest chemical engineers, Norbert Rillieux (1806-1894), became an instructor of applied mechanics at L’École Centrale des Arts et Manufactures, now part of Université Paris-Saclay, France.
Rillieux began researching a more efficient sugar refining process and moved back to Louisiana at the prospect of becoming head engineer at a new sugar refinery. There he completed his research and was granted Patent No. US4879 in 1846, which explained his “new and useful Improvements in the Method of Heating, Evaporating, and Cooling Liquids, especially intended for the manufacture of sugar.” This innovation allowed for more efficient production and the use of less fuel. Fun fact: Rillieux is a cousin of Edgar Degas, the French impressionist painter.
Marie Maynard Daly (1921-2003), born and raised in the borough of Queens, New York, earned a bachelor’s degree in chemistry from Queens College and her master’s in chemistry from New York University. Based on her dissertation at Columbia University, “A Study of the Products Formed by the Action of Pancreatic Amylase on Corn Starch,” she became the first African American woman in the United States to earn a Ph.D. in chemistry.
Daly taught chemistry at Howard University, performed research on the metabolism of nucleic components at the Rockefeller Institute, and taught biochemistry at the College of Physicians and Surgeons of Columbia University, ultimately becoming a professor at the Albert Einstein College of Medicine. She was a prolific author on wide-ranging subjects and was published in highly regarded journals like the Journal of General Physiology, the Journal of Experimental Medicine, and the Journal of Clinical Investigation. These accomplishments are incredible, and even more so, she instituted a scholarship program in her parents’ name at Queens College for minority students eager to study science.
Well-known for his landmark synthesis of physostigmine, a compound that to this day is used in the treatment of glaucoma, Percy Julian (1899-1975) and his findings co-authored by Josef Pikl appeared in the Journal of the American Chemical Society v.57, no. 4.
Julian made enormous contributions to the field of medicinal chemistry. Millions of people have benefited from his research, which brought him more than 100 patents, including one for margarine!
Doing incredible work against just as incredible odds, these four Americans contributed not only to this country but the world. The following list of Internet and print resources is a good place to start learning more about them and their discoveries, as well as other amazing African American chemists.
#PlanetInnerWorking; #Interpretation; #AthanasiusKircher; #Maps; #LibraryOfCongress
Washington/Canadian-Media: Athanasius Kircher, a scholar, scientist, and Jesuit priest, developed theories about the world beneath his feet and created a series of three maps in 1668 as part of his book 'Mundus subterraneus' (Subterranean World) that show an impressive interpretation of the planet’s inner workings, Library of Congress (LoC) reports said.
These maps are housed in the LOC's Rare Book Division.
Scientists and storytellers have often wondered about the happenings under the surface of the Earth, and have come up with imaginative subterranean worlds.
Kircher thought that the subterranean world could explain the volcanic activity and the movements of the tides.
The first map, Systema Ideale Pyrophylaciorum, explains a complex system by which fire travels from the Earth’s core to its surface, breaking through via the eruptions of volcanoes (or montes Vulcanii, mountains of Vulcan, the Roman god of metalworking and fire).
Athanasius Kircher. Systema ideale pyrophylaciorum. 1668. Rare Book Division of LoC.
Shown on the map is a large central fire (ignis centralis) labeled A, with canals labeled C, and smaller lakes (aestuaria) of fire, labeled B.
These underground lakes and canals are similar to those found on the Earth’s surface, with the difference that they are made of fire.
Clearly discernible on the map are paths leading from the central flame to volcanic eruptions around the world; the smoke emerging from the volcanoes matches the swirling clouds surrounding the globe, evoking the smoke and ash that accompany volcanic eruptions.
Kircher admitted that there were gaps in his theory, but by the standards of current science the notion of a fiery core at the Earth’s center is not entirely incorrect: the U.S. Energy Information Administration reports that the temperature of the inner core is as hot as the surface of the sun.
However, according to Kircher, besides fire or pyrophylacia (“fire-houses”) traveling through the underground, there were also hydrophylacia, or “water-houses,” that interacted with the ignis centralis and also moved via canals and lakes.
Athanasius Kircher. Detail from System ideale qvo exprimitur, aquarum, showing two whirlpools on the ocean’s surface and the “mouths” at the base of two mountains. 1668. Rare Book Division of LoC.
Kircher believed the Earth’s interior to be one of movement, and he attributed the formation of tides to this interaction of water and fire under the surface of the earth. The interaction could be destructive, too, causing whirlpools.
Water was pushed up through the surface at the base of mountains, the mouths of which can be seen on the map.
In addition to presenting a lively view of the subterranean, Kircher also made a map showing the effects of these underground systems of water and fire on the surface, with volcanoes (montes vulcanios) and whirlpools (abyssos) labeled, seen below.
Included in the map are common features of other early maps of territories that had newly been discovered by European colonial powers: California appears as a peninsula, and Australia is connected to Antarctica.
Australia/Antarctica is labeled on the map as “incognita,” or unknown; this region is devoid of volcanoes, which appear on every other continent.
Also found in narrow passages and around the capes of continents are whirlpools that underscore the many dangers of exploration.
Athanasius Kircher. Tabula geographico-hydrographica motus oceani, currents, abyssos, montes ignivomos. 1668. Rare Book Division of LoC.
Besides his geological theories, Kircher is also known for his other accomplishments including mapping the mythic island of Atlantis and pioneering studies in Egyptology.
Although his view of the underground is not believed today, these maps offer striking examples of how maps can be used for scientific purposes, both under the Earth and beyond it.
#Washington; #LibraryOfCongress; #Georeferencing; #rasterData; #SpatialReferenceInfo
Washington/Canadian-Media: Meagan Snow, Geospatial Data Visualization Librarian in the Geography and Map Division of the Library of Congress (LoC), explains the technology behind georeferencing, the process of adding digital spatial reference information to an otherwise non-spatial image, LoC reports said.
Library of Congress. Image credit: Twitter handle
Adding spatial reference information to a scanned map image allows the image to align correctly with the geographic features it was made to represent.
This enables a user to layer any other spatial data file alongside (or on top of) their map image.
Snow makes use of the following 1967 map of the US Capitol grounds as an example.
Map showing properties under the jurisdiction of the Architect of the Capitol, 1967. Geography & Map Division, Library of Congress. Image credit: LoC
This map shows properties under the jurisdiction of the Architect of the Capitol in 1967. The Madison Building of the Library of Congress, home to the Geography & Map Division, is missing from this map. Comparing this 1967 map to today’s Capitol Hill Complex reveals how the area has changed over time.
Maps that are scanned as image files, explains Snow, meet the criteria for what is called raster data: data composed of a continuous grid of cells (or pixels).
The fact that spatial data can commonly be stored in a raster format enables scanned map images to be loaded directly into GIS software without any file conversions needed.
It is this spatial reference information that enables geographic data layers to align correctly when viewed in GIS software. To add it to a scanned map, a user manually places control points between the non-spatial scanned map image and a pre-existing GIS data layer that already has spatial reference information and displays correctly in GIS software.
Georeferencing tools, available in all of the most widely used GIS software options, allow a user to place a control point by selecting a specific point on the scanned map image and then selecting the exact same point on the GIS layer. Once the user adds a couple of control points, the scanned map image will begin to align with the existing data layer’s scale.
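Under the hood, GIS software typically uses the control points to fit a transformation from the image's pixel coordinates to geographic coordinates. A minimal sketch of the simplest case, a first-order (affine) transform fitted by least squares, is below; the control-point values are invented for illustration and are not from the Capitol grounds map:

```python
def fit_affine(pixel_pts, geo_pts):
    """Fit x_geo = a*px + b*py + c and y_geo = d*px + e*py + f
    by least squares over control-point pairs (needs >= 3 non-collinear points)."""
    def solve3(A, b):
        # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(3):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [x - f * y for x, y in zip(M[r], M[col])]
        return [M[i][3] / M[i][i] for i in range(3)]

    # Normal equations (P^T P) coeffs = P^T target, with design rows (px, py, 1).
    rows = [(px, py, 1.0) for px, py in pixel_pts]
    PtP = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    coeffs = []
    for axis in (0, 1):
        Ptt = [sum(r[i] * g[axis] for r, g in zip(rows, geo_pts)) for i in range(3)]
        coeffs.append(solve3(PtP, Ptt))
    return coeffs  # [[a, b, c], [d, e, f]]

def to_geo(coeffs, px, py):
    """Map a pixel coordinate into geographic space with the fitted transform."""
    (a, b, c), (d, e, f) = coeffs
    return a * px + b * py + c, d * px + e * py + f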
Beginning of the georeferencing process: two control points have been placed between the scanned map image and the current aerial imagery, bringing the scanned map image to the correct scale but not the correct placement. Image credit: LoC
But for the rest of the map to be aligned, a user must continue to add control points, making sure they are well-distributed across the map image, until it is determined that the two layers are aligned properly. Here’s what the map looks like after 21 control points have been placed.
Completion of the georeferencing process: a number of well-distributed control points have been placed, bringing the scanned map image to both the correct scale and correct geographic placement. Image credit: LoC
The georeferencing can be saved after the completion of the process enabling the scanned map image to go to the right place in the world whenever it is loaded into GIS software.
In the lower right-hand corner of the map we can now see the current aerial footprint of the Madison Building where it was missing.
The 1967 map of the Capitol grounds is layered against current aerial imagery, showing 50 years of changes to the Capitol Complex, including the construction of the Library of Congress’ Madison Building. Image credit: LoC
Georeferencing a scanned image has many uses, the primary one being that it allows a map user to view the map in geographic context alongside any number of other spatial data sources, and further enables the user to compare maps created at different scales or in different time periods.
Users can also see an older map juxtaposed against current aerial imagery or spatial data, or use the scanned map image as a basis for creating new spatial datasets.
#Canada; #Science&TechnologyMuseum; #ArtifactAlley; #AugmentedAlleyApp; #ISpyGame
Ottawa/Canadian-Media: Situated at 1867 St. Laurent Blvd., Ottawa, Ontario, the Canada Science and Technology Museum (CSTM) originally opened in 1967, then closed its doors to the public for three years while $80.5 million was invested in the renewal of its entire building. It reopened to the public on November 17, 2017 with a fresh chapter, CSTM reports said.
Canada Science and Technology Museum. Image credit: Ingenium website
The completely redesigned space of the new museum encompasses more than 7,400 square meters (80,000 sq. ft.), including a temporary exhibition hall to accommodate travelling exhibitions from around the world.
Besides the locomotives and the Crazy Kitchen, visitors are drawn to the new museum’s many artifacts and interactives, which tell Canada’s innovation story in an immersive, educational, and fun way.
Visitors are invited through curiosity, observation, and creativity to discover, play, and experience how people have made Canada and continue to shape its future. Whether exploring the museum’s 11 exhibitions, watching the demonstration stage, or tinkering in Exploratek, people will find themselves becoming part of Canada’s story of science, technology, and innovation.
CSTM is open to the public from 10 a.m. to 5 p.m., Wednesday through Saturday, and houses both Temporary Exhibitions and a Permanent Exhibition.
In this story, I will be talking about the Permanent Exhibition.
The Permanent Exhibition's chief features are Artifact Alley, the Augmented Alley App, and the I-Spy Game.
Artifact Alley is the dazzling centre hall of the Canada Science and Technology Museum. Encompassing eight distinctly-themed cases and the Demo Stage, Artifact Alley is the museum’s backbone. More than 700 artifacts are on display – arranged as stand-alone pieces or in artful groups. Visitors will experience an immersive winter scene, take the wheel of a ship, and see how science and technology figure into our daily lives. Get hands-on with real woodworking tools, discover old technologies that can now be found as apps on a smartphone, take command of a sci-fi spacecraft, and more!
The Augmented Alley App can be downloaded for free before you explore the museum, and it lets you scan the artifacts as you visit.
Iconic Canadian treasures give a glimpse of how they would have worked in their day. Beautiful animations and stunning augmented reality bring these artifacts to life, letting you experience vintage footage, hear retro music, and be transported to the recent past!
Augmented Alley is a window into one of Canada’s most extensive historical collections and lets you relive old memories … or make new ones, sharing the novelty of a rotary phone, a record player, or a typewriter.
The real artifacts on display can be compared to detailed, true-to-life in-app illustrations that bring to light surprising details and stories of the featured artifacts.
Its main features are 13 unique artifact experiences to unlock; an exciting mix of AR experiences and interactive animations; 3D SLAM-based markerless object recognition; fun factsheets with pop-up features and artifact details; and true-to-life artifact renderings.
#Ottawa; #COVIDAlertGlitch; #ExposureNotification; #GooglePlayStore; #AppleAppStore
Ottawa/Canadian-Media: A glitch in Canada's COVID Alert app that left some Canadian users without exposure notifications for much of November was fixed last week by the developers, media reports said.
Covid Alert app. Image credit: Twitter handle
The glitch prevented the COVID Alert app from checking for potential exposure to the coronavirus on certain devices for at least two weeks in November.
An update to the app released on Nov. 23 said it would fix a "bug causing gaps in exposure checks for some users."
It is unclear how many people missed exposure notifications due to the glitch, which raises the prospect that some were not advised to self-isolate or seek a COVID-19 test in a timely manner, presumably delaying diagnosis.
The episode highlights how the app could give users a false sense of security, said Kelly Bronson, a Canada Research Chair in science and society who serves on the Global Pandemic App Watch program at the University of Ottawa, which tracks the uptake of similar tools around the world. She pointed to "automation bias" -- the human tendency to rely on automated decision-making, which can reduce personal vigilance -- and warned that the apps "are not a panacea."
"I think it's really important that people know the limitations of these technologies," she said, CBC News reported.
The app is supposed to retrieve codes from a central server and check them for potential COVID-19 exposure several times a day.
Several users said on social media that their devices did not show any exposure checks from Nov. 9 to 23.
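The gap users noticed amounts to missing entries in the app's record of completed exposure checks. As a hypothetical illustration (the function name and log format below are invented for this sketch, not taken from the COVID Alert codebase), here is how such a gap could be detected from a list of check dates:

```python
from datetime import date

def find_check_gaps(check_dates, max_gap_days=1):
    """Return (start, end) pairs where consecutive exposure checks
    were more than max_gap_days apart -- i.e., stretches where the
    app silently skipped its scheduled checks."""
    dates = sorted(check_dates)
    gaps = []
    for earlier, later in zip(dates, dates[1:]):
        if (later - earlier).days > max_gap_days:
            gaps.append((earlier, later))
    return gaps

# Hypothetical log: daily checks that stop on Nov. 9 and resume on
# Nov. 23, mirroring the gap users reported.
checks = [date(2020, 11, d) for d in range(1, 10)] + [date(2020, 11, 23)]
print(find_check_gaps(checks))
# -> [(datetime.date(2020, 11, 9), datetime.date(2020, 11, 23))]
```

Because the checks run silently in the background, a user would only discover such a gap by opening the app's settings and reading the check history, which is evidently how the commenters below noticed the problem.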
The problem appears to have first been reported by commenters in the Google Play Store as early as Nov. 12. "I noticed today that COVID Alert has done no exposure checks for the last two weeks," a user wrote in Apple's App Store on Nov. 20. "What good is this?", CBC News reported.
Users of Android devices should check the Google Play Store, and users of iPhones should check the Apple App Store, to ensure their COVID Alert app is up to date.