Archive for the ‘Video’ Category

Audio/Video Conferencing – Apache OpenMeetings

Wednesday, September 7th, 2016

Apache OpenMeetings

Ignorance of Apache OpenMeetings is the only explanation I can offer for non-Apache OpenMeetings webinars with one presenter, listeners and a chat channel.

Proprietary solutions limit your audience’s choice of platforms, while offering no, repeat no advantages over Apache OpenMeetings.

It may be that your IT department is too busy creating SQLi weaknesses to install and configure Apache OpenMeetings, but even so that’s a fairly poor excuse for not using it.

If you just have to spend money to “trust” software, there are commercial services that offer hosting and other services for Apache OpenMeetings.

Apologies, sort of, for the Wednesday rant, but I tire of limited but “popular logo” commercial services used in place of robust open source solutions.

Secret Cameras Recording Baltimore’s… [Watching the Watchers?]

Wednesday, August 24th, 2016

Secret Cameras Recording Baltimore’s Every Move From Above by Monte Reel.

Unknown to the citizens of Baltimore, they have been under privately funded, plane-based video surveillance since the beginning of 2016.

The pitch to the city:

“Imagine Google Earth with TiVo capability.”

You need to read Monte’s article in full, and there are names you will recognize if you listen to public radio:

Last year the public radio program Radiolab featured Persistent Surveillance in a segment about the tricky balance between security and privacy. Shortly after that, McNutt got an e-mail on behalf of Texas-based philanthropists Laura and John Arnold. John is a former Enron trader whose hedge fund, Centaurus Advisors, made billions before he retired in 2012. Since then, the Arnolds have funded a variety of hot-button causes, including advocating for public pension rollbacks and charter schools. The Arnolds told McNutt that if he could find a city that would allow the company to fly for several months, they would donate the money to keep the plane in the air. McNutt had met the lieutenant in charge of Baltimore’s ground-based camera system on the trade-show circuit, and they’d become friendly. “We settled in on Baltimore because it was ready, it was willing, and it was just post-Freddie Gray,” McNutt says. The Arnolds donated the money to the Baltimore Community Foundation, a nonprofit that administers donations to a wide range of local civic causes.

I find the mention of Freddie Gray ironic, considering how truthful and forthcoming the city and its police officers were in that case.

If footage exists for some future Freddie Gray-like case, you can rest assured the relevant camera failed, the daily data output failed, a Rose Mary Woods erasure accident happened, etc.

From Monte’s report, we aren’t at facial recognition, yet, assuming his sources were being truthful. But we all know that’s coming, if not already present.

Many will call for regulation of this latest intrusion into your privacy, but regulation depends upon truthful data upon which to judge compliance. The routine absence of truthful data about police activities, both digital and non-digital, makes regulation difficult to say the least.

In the absence of truthful police data, it is incumbent upon citizens to fill that gap, both for effective regulation of police surveillance and for the regulation of police conduct.

The need for an ad-hoc citizen-based surveillance system is clear.

What isn’t clear is how such a system would evolve.

Perhaps a server that stitches together cellphone video based on GPS coordinates and orientation? From multiple cellphones? Everyone can contribute X seconds of video from any given location?

It would not be seamless, but if we all target known police officers and public officials…, who knows how complete a record could be developed?
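
The stitching idea can be sketched in a few lines. The sketch below is my own invention, not an existing system; the field names, the coordinate threshold and the time slack are all assumptions. It groups contributed clips into candidate “scenes” when their GPS coordinates and time windows overlap:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    device_id: str
    lat: float        # degrees
    lon: float        # degrees
    heading: float    # compass degrees, 0 = north
    start: float      # unix seconds
    duration: float   # seconds

def overlaps(a: Clip, b: Clip, radius_deg: float = 0.001, slack: float = 2.0) -> bool:
    """True if two clips were shot near the same spot at overlapping times."""
    near = abs(a.lat - b.lat) <= radius_deg and abs(a.lon - b.lon) <= radius_deg
    a_end, b_end = a.start + a.duration, b.start + b.duration
    in_time = a.start <= b_end + slack and b.start <= a_end + slack
    return near and in_time

def group_clips(clips):
    """Greedily union clips into candidate scenes for stitching."""
    groups = []
    for clip in sorted(clips, key=lambda c: c.start):
        for g in groups:
            if any(overlaps(clip, other) for other in g):
                g.append(clip)
                break
        else:
            groups.append([clip])
    return groups

# Two phones at the same corner, one across town (coordinates hypothetical):
scene = group_clips([
    Clip("a", 39.2904, -76.6122, 90.0, 0.0, 30.0),
    Clip("b", 39.2905, -76.6121, 270.0, 20.0, 30.0),
    Clip("c", 40.7128, -74.0060, 0.0, 10.0, 30.0),
])
```

A real server would also use the heading to decide which clips show the same subject from different angles, but even this crude grouping answers the “X seconds of video from any given location” question.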

Crowdsourced-Citizen-Surveillance anyone?

Torturing Iraqi Prisoners – Roles for Heroes like Warrant Officer Hugh Thompson?

Monday, August 1st, 2016

Kaveh Waddell pens a troubling story in A Video Game That Lets You Torture Iraqi Prisoners, which reads in part:

What if there were a way to make sense of state-sanctioned torture in a more visceral way than by reading a news article or watching a documentary? Two years ago, that’s exactly what a team of Pittsburgh-based video-game designers set out to create: an experience that would bring people uncomfortably close to the abuses that took place in one particularly infamous prison camp.

In the game, which is still in development, players assume the role of an American service member stationed at Camp Bucca, a detention center that was located near the port city of Umm Qasr in southeast Iraq, at an undetermined time during the Iraq War. Throughout the game, players interact with Iraqi prisoners, who are clothed in the camp’s trademark yellow jumpsuits and occasionally have black hoods pulled over their heads. The player must interrogate the prisoners, choosing between methods like waterboarding or electrocution to extract information. If an interrogation goes too far, the questioner can kill the prisoner.

Players also have to move captives around the prison camp, arranging them in cell blocks throughout the area. Camp Bucca is best known for incubating the group of fighters who would go on to create ISIS: The group’s leader, Abu Bakr al-Baghdadi, was held there for five years, where he likely forged many of the connections that make up the group’s network today. The developers say they chose to have the player wrestle with cell assignments to underscore the role of American prison camps in radicalizing the next generation of fighters and terrorists.

The developers relied on allegations of prisoner abuse in archived news articles and a leaked Red Cross report to guide their game design. While there were many reports of prisoner abuse at Camp Bucca, they were never so widespread as to prompt an official public investigation.

I find the hope that the game will convey:

“the firsthand revulsion of being in the position of torturer.”

unrealistic in light of the literature on Stanley Milgram’s electric-shock studies.

In the early 1960s, Milgram conducted a psychology experiment in which volunteers, under the supervision of an experimenter, administered what they believed were electric shocks to a test subject (an actor who was never actually harmed). The shock levels went all the way to 450 volts, and a full 65% of the volunteers went all the way to 450 volts, with the test subject screaming in pain.

Needless to say, the literature on that experiment has spanned decades, including re-enactments, some of which includes:

Rethinking One of Psychology’s Most Infamous Experiments by Cari Romm.

The Game of Death: France’s Shocking TV Experiment by Bruce Crumley.

Original materials:

Obedience to Authority in the Archive

From the webpage:

Stanley Milgram, whose papers are held in Manuscripts and Archives, conducted the Obedience to Authority experiments while he was an assistant professor at Yale University from 1961 to 1963. Milgram found that most ordinary people obeyed instructions to give what they believed to be potentially fatal shocks to innocent victims when told to do so by an authority figure. His 1963 article[i] on the initial findings and a subsequent book, Obedience to Authority: An Experimental View (1974), and film, Obedience (1969), catapulted Milgram to celebrity status and made his findings and the experiments themselves the focus of intense ethical debates.[ii] Fifty years later the debate continues.

The Yale University Library acquired the Stanley Milgram Papers from Alexandra Milgram, his widow, in July 1985, less than a year after Milgram’s death. Requests for access started coming in soon after. The collection remained closed to research for several years until processed by archivist Diane Kaplan. In addition to the correspondence, writings, subject files, and teaching files often found in the papers of academics, the collection also contains the data files for Milgram’s experiments, including administrative records, notebooks, files on experimental subjects, and audio recordings of experimental sessions, debriefing sessions, and post-experiment interviews.

The only redeeming aspect of the experiment, and of real-life situations like My Lai, is that not everyone is willing to tolerate or commit outrageous acts.

Hopefully the game will include roles for people like Warrant Officer Hugh Thompson, who ended the massacre at My Lai by interposing his helicopter between American troops and retreating villagers and turning his weapons on the American troops.

Would you pull your weapon on a fellow member of the service to stop the torture of an Iraqi prisoner?

Would you use your weapon on a fellow member of the service to stop the torture of an Iraqi prisoner?

Would you?

Survey says: At least 65% of you would not.

Audiogram (New York Public Radio)

Monday, August 1st, 2016

Audiogram from New York Public Radio.

My interest in Audiogram was sparked by the need to convert an audio file into video, so the captioning service at YouTube would provide a rough cut at transcribing the audio.
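
For the curious, one way to do that conversion (an assumption on my part, not the only route) is to have ffmpeg render the audio’s waveform as the picture track. The helper below only builds the ffmpeg command, so you can inspect it before running it; the file names are hypothetical and ffmpeg must be on your PATH to actually run it:

```python
import subprocess

def audio_to_video_cmd(audio_path: str, out_path: str,
                       size: str = "1280x720") -> list:
    """Build an ffmpeg command that wraps an audio file in a video
    container, using the showwaves filter to draw the waveform as the
    picture track.  The resulting .mp4 can be uploaded to YouTube so
    its captioning service will attempt a rough transcript."""
    return [
        "ffmpeg", "-i", audio_path,
        "-filter_complex", f"[0:a]showwaves=s={size}:mode=line[v]",
        "-map", "[v]", "-map", "0:a",
        "-c:v", "libx264", "-c:a", "aac",
        out_path,
    ]

cmd = audio_to_video_cmd("interview.mp3", "interview.mp4")
# To actually run it (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```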

From the post:

Audiogram is a library for generating shareable videos from audio clips.

Here are some examples of the audiograms it creates:

Why does this exist?

Unlike audio, video is a first-class citizen of social media. It’s easy to embed, share, autoplay, or play in a feed, and the major services are likely to improve their video experiences further over time.

Our solution to this problem at WNYC was this library. Given a piece of audio we want to share on social media, we can generate a video with that audio and some basic accompanying visuals: a waveform of the audio, a theme for the show it comes from, and a caption.

For more on the backstory behind audiograms, read this post.

I hope to finish the transcript I obtained from YouTube later this week and will post it, along with all the steps I took to produce it.

Hiding the process and/or the result would be poor repayment to all those who have shared so much, like New York Public Radio.

Face2Face – Facial Mimicry In Real-Time Video

Sunday, March 20th, 2016

Is a video enough for you to attribute quotes to a public figure?

After reading This system instantly edits videos to make it look like you’re saying something you’re not by Greg Kumparak, you may not be so sure.

From the post:

The video up top shows a work-in-progress system called Face2Face (research paper here) being built by researchers at Stanford, the Max Planck Institute and the University of Erlangen-Nuremberg.

The short version: take a YouTube video of someone speaking like, say, George W. Bush. Use a standard RGB webcam to capture a video of someone else emoting and saying something entirely different. Throw both videos into the Face2Face system and, bam, you’ve now got a relatively believable video of George W. Bush’s face — now almost entirely synthesized — doing whatever the actor in the second video wanted the target’s face to do. It even tries to work out what the interior of their mouth should look like as they’re speaking.

Face2Face: Real-time Face Capture and Reenactment of RGB Videos by Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner, offers the following abstract:

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.

The video is most impressive:

If you want to dig deeper, consider from 2015: Real-time Expression Transfer for Facial Reenactment (PDF paper), by Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger, Christian Theobalt.

With its separately impressive video:

The facial mimicry isn’t perfect by any means but it is remarkably good.

Not a prediction, but full body mimicry in 5 years would not surprise me.

The surprise will be the first non-consenting subject of full body mimicry.

What would you want to see Donald (short-fingers) Trump doing with a pumpkin?

PS: Apologies, I wasn’t able to locate a PDF of the 2016 paper.

‘You Were There!’ Historical Evidence Of Participation

Saturday, February 13th, 2016

Free: British Pathé Puts Over 85,000 Historical Films on YouTube by Jonathan Crow.

From the post:

British Pathé was one of the leading producers of newsreels and documentaries during the 20th Century. This week, the company, now an archive, is turning over its entire collection — over 85,000 historical films – to YouTube.

The archive — which spans from 1896 to 1976 – is a goldmine of footage, containing movies of some of the most important moments of the last 100 years. It’s a treasure trove for film buffs, culture nerds and history mavens everywhere. In Pathé’s playlist “A Day That Shook the World,” which traces an Anglo-centric history of the 20th Century, you will find clips of the Wright Brothers’ first flight, the bombing of Hiroshima and Neil Armstrong’s walk on the moon, alongside footage of Queen Victoria’s funeral and Roger Bannister’s 4-minute mile. There’s, of course, footage of the dramatic Hindenburg crash and Lindbergh’s daring cross-Atlantic flight. And then you can see King Edward VIII abdicating the throne in 1936, Hitler’s first speech upon becoming the German Chancellor in 1933 and the eventual Pearl Harbor attack in December 1941 (above).

But the really intriguing part of the archive is seeing all the ephemera from the 20th Century, the stuff that really makes the past feel like a foreign country – the weird hairstyles, the way a city street looked, the breathtakingly casual sexism and racism. There’s a rush in seeing history come alive. Case in point, this documentary from 1967 about the wonders to be found in a surprisingly monochrome Virginia.

A treasure trove of over 85,000 historical films!

With modern face recognition technology, imagine mining these films and matching faces up against other photographic archives.

Rather than seeing George Wallace, for example, as a single nasty piece of work during the 1960s, we may identify the followers of such “leaders.”
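
As a rough sketch of what that mining might look like: the encodings, labels and tolerance below are invented for illustration. In practice the vectors would come from a face-embedding model, such as the one behind the open-source face_recognition library, run over newsreel frames and over the comparison photo archive:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_faces(archive_encodings, reference_encodings, tolerance=0.6):
    """Given (frame_id, encoding) pairs extracted from film frames and
    (label, encoding) pairs from a labeled photo archive, return the
    pairs whose embedding distance falls under the tolerance."""
    matches = []
    for frame_id, enc in archive_encodings:
        for label, ref in reference_encodings:
            if euclidean(enc, ref) <= tolerance:
                matches.append((frame_id, label))
    return matches

# Toy two-dimensional "encodings"; real ones have many more dimensions.
matches = match_faces(
    [("reel1_frame100", [0.10, 0.20]), ("reel2_frame5", [0.90, 0.90])],
    [("george_wallace", [0.10, 0.25])],
)
```

The hard parts, frame extraction, embedding quality, and the sheer scale of 85,000 films, are all hidden behind those toy vectors, but the matching step itself really is this simple.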

Those who would discriminate on the basis of race, gender, religion, sexual orientation, ethnic origin, language, etc. are empowered by those of similar views.

One use of this historical archive would be to “out” the followers of such bigots.

To protect “former” fascist supporters on the International Olympic Committee, the EU will protest any search engine that reports such results.

You should judge the IOC by their supporters as well. (Not the athletes, but the IOC.)

Pump Up The Noise! Real Time Video

Monday, May 18th, 2015

Why Meerkat and Periscope Are the Biggest Things Since, Well, Twitter by Ryan Holmes.

From the post:

Finally, there are the global and political implications. If every single person on earth with a phone is able to broadcast anything in real time, we’re going to see a democratization of sharing information in ways we’ve never seen before. Take for example the crucial role that Twitter played in the Egyptian revolution of 2011. In many cases, social media became a new type of lifeline for people on the ground to share accounts of what was happening with the world. Now, imagine a similar world event in which live updates from citizens are in real-time video. These types of updates will transport viewers to events and places in ways we have never seen before.

Live video streaming is valuable for some use cases but the thought of “…every single person on earth is able to broadcast anything in real time…” fills me with despair.

Seriously. Think about the bandwidth you lose from your real time circumstances to watch a partial view of someone else’s real time circumstance.

Every displaced person in every conflict around the world could broadcast a live feed of their plight, but how many of those can you fit into a day? (Assume you aren’t being tube fed and have some real time interaction in your own environment.)

Live video invites the imagining of a social context, a context that isn’t possible to display as part of a real-time video. Every real-time video feed has such a context, which requires even more effort to acquire, separate from the video feed.

As an example, take the “…the crucial role that Twitter played…” claim from the quote. Really? According to some accounts (The Myth of the ‘Social Media Revolution’, It’s Time to Debunk the Many Myths of the Egyptian Revolution), work on the issues and organization that resulted in the Arab Spring had been building for a decade, something the Twitter-centric accounts pass over in silence.

Moreover, as of September 2011, Egypt had only 129,711 Twitter users, so at the time of the Arab Spring the number was even lower. Not to mention that the poor who provided the backbone of the revolution did not have Western-style phones with Twitter accounts.

A tweeted revolution is one viewed through a 140-character lens with no social context.

Now imagine real time imagery of “riots by hooligans” or “revolts by the oppressed” or “historical reenactments.” Despite its high bandwidth, real time video can’t reliably provide you with the context necessary to distinguish any of those cases from the others. No doubt real time video can advocate for one case or the other, but that isn’t the same as giving you the facts necessary to reach your own conclusions.

Real time video is a market opportunity for editorial/summary services that mine live video and provide a synopsis of its content. Five thousand live video accounts about displaced persons suffering from cold temperatures and lack of food isn’t actionable. Knowing what is required and where to deliver it is.


LongoMatch

Saturday, March 8th, 2014

LongoMatch

From the “Features” page:

Performance analysis made easy

LongoMatch has been designed to be very easy to use, exposing the basic functionalities of video analysis in an intuitive interface. Tagging, playback and editing of stored events can be easily done from the main window, while more specific features can be accessed through menus when needed.

Flexible and customizable for all sports

LongoMatch can be used for any kind of sport, allowing you to create custom templates with an unlimited number of tagging categories. It also supports defining custom subcategories and creating templates for your teams with detailed information on each player, which is the perfect combination for fine-grained performance analysis.

Post-match and real time analysis

LongoMatch can be used for post-match analysis supporting the most common video formats as well as for live analysis, capturing from Firewire, USB video capturers, IP cameras or without any capture device at all, decoupling the capture process from the analysis, but having it ready as soon as the recording is done. With live replay, without stopping the capture, you can review tagged events and export them while still analyzing the game live.

Although pitched as software for analyzing sports events, it occurs to me this could be useful in a number of contexts.

Such as analyzing news footage of police encounters with members of the public.

Or video footage of particular locations. Foot or vehicle traffic.

The possibilities are endless.

Then it’s just a question of tying that information together with data from other information feeds. 😉

[Disorderly] Video Lectures in Mathematics

Monday, December 2nd, 2013

[Disorderly] Video Lectures in Mathematics

Pinterest, home to a disorderly collection of video lectures on mathematics.

Not the fault of the lectures, but only broad-bucket organization is possible on Pinterest.

If you need a holiday project, organizing this collection would be a real value-add for the community.

The organization would have to be outside of Pinterest and pointing back to the lectures.

Distributed Multimedia Systems (Archives)

Tuesday, February 12th, 2013

Proceedings of the International Conference on Distributed Multimedia Systems

From the webpage:

DMS 2012 Proceedings August 9 to August 11, 2012 Eden Roc Renaissance Miami Beach, USA
DMS 2011 Proceedings August 18 to August 19, 2011 Convitto della Calza, Florence, Italy
DMS 2010 Proceedings October 14 to October 16, 2010 Hyatt Lodge at McDonald’s Campus, Oak Brook, Illinois, USA
DMS 2009 Proceedings September 10 to September 12, 2009 Hotel Sofitel, Redwood City, San Francisco Bay, USA
DMS 2008 Proceedings September 4 to September 6, 2008 Hyatt Harborside at Logan Int’l Airport, Boston, USA
DMS 2007 Proceedings September 6 to September 8, 2007 Hotel Sofitel, Redwood City, San Francisco Bay, USA

For coverage, see the Call for Papers, DMS 2013.

Another archive with topic map related papers!

DMS 2013

Tuesday, February 12th, 2013

DMS 2013: The 19th International Conference on Distributed Multimedia Systems


Paper submission due: April 29, 2013
Notification of acceptance: May 31, 2013
Camera-ready copy: June 15, 2013
Early conference registration due: June 15, 2013
Conference: August 8 – 10, 2013

From the call for papers:

With today’s proliferation of multimedia data (e.g., images, animations, video, and sound), comes the challenge of using such information to facilitate data analysis, modeling, presentation, interaction and programming, particularly for end-users who are domain experts, but not IT professionals. The main theme of the 19th International Conference on Distributed Multimedia Systems (DMS’2013) is multimedia inspired computing. The conference organizers seek contributions of high quality papers, panels or tutorials, addressing any novel aspect of computing (e.g., programming language or environment, data analysis, scientific visualization, etc.) that significantly benefits from the incorporation/integration of multimedia data (e.g., visual, audio, pen, voice, image, etc.), for presentation at the conference and publication in the proceedings. Both research and case study papers or demonstrations describing results in research area as well as industrial development cases and experiences are solicited. The use of prototypes and demonstration video for presentations is encouraged.


Topics of interest include, but are not limited to:

Distributed Multimedia Technology

  • media coding, acquisition and standards
  • QoS and Quality of Experience control
  • digital rights management and conditional access solutions
  • privacy and security issues
  • mobile devices and wireless networks
  • mobile intelligent applications
  • sensor networks, environment control and management

Distributed Multimedia Models and Systems

  • human-computer interaction
  • languages for distributed multimedia
  • multimedia software engineering issues
  • semantic computing and processing
  • media grid computing, cloud and virtualization
  • web services and multi-agent systems
  • multimedia databases and information systems
  • multimedia indexing and retrieval systems
  • multimedia and cross media authoring

Applications of Distributed Multimedia Systems

  • collaborative and social multimedia systems and solutions
  • humanities and cultural heritage applications, management and fruition
  • multimedia preservation
  • cultural heritage preservation, management and fruition
  • distance and lifelong learning
  • emergency and safety management
  • e-commerce and e-government applications
  • health care management and disability assistance
  • intelligent multimedia computing
  • internet multimedia computing
  • virtual, mixed and augmented reality
  • user profiling, reasoning and recommendations

The presence of information/data doesn’t mean topic maps return good ROI.

On the other hand, the presence of information/data does mean semantic impedance is present.

The question is what need you have to overcome that semantic impedance, and at what cost.

Marakana – Open Source Training

Monday, April 23rd, 2012

Marakana – Open Source Training

From the homepage:

Marakana’s raison d’être is to help people get better at what they do professionally. We accomplish this by organizing software training courses (both public and private) as well as publishing learning resources, sharing knowledge from industry leaders, providing a place to share useful tidbits and supporting the community. Our focus is open source software.

I found this while watching scikit-learn – Machine Learning in Python – Astronomy, which was broadcast on Marakana TechTV.

From the Marakana TechTV homepage:

Marakana TechTV is an initiative to provide the world with free educational content on cutting-edge open source topics. Check out our work.

We work with open source communities to cover tech events world wide, as well as industry experts to create high quality informational videos from Marakana’s studio in downtown San Francisco.

…and we do it all at no charge. As an open source training company, Marakana believes in helping people get better at what they do, and through Marakana TechTV we’re able to engage open source communities around the globe, promote our training services, and stay current on the latest and greatest in open source.

Useful content and possibly a place to post educational videos. Such as on topic maps?

Tiny New Zealand Company Brings Cool Microsoft Video Tech To The World

Tuesday, April 10th, 2012

Tiny New Zealand Company Brings Cool Microsoft Video Tech To The World

Whitney Grace writes:

New Zealand is known for its beautiful countryside and all the popular movies filmed there, sheep, and Dot Com. Business Insider reports there is another item to add to the island nation’s “list of reasons to be famous,” “Tiny New Zealand Company Brings Cool Microsoft Video Tech to the World.” The small startup GreenButton used search technology from Microsoft Research and created InCus, a service that transcribes audio and video files to make them searchable. It is aimed at corporate enterprises, to make their digital media libraries searchable.

Where there is searching, there are subjects.

Take that as a given.

The startup: GreenButton.

Apparently speech transcription. No motion detection/analysis for indexing. That would be a lot tougher.

An interesting opportunity for an “add-on” to this service: use a topic map to map to other resources.

One service invents the potential for another.

Video Search – Webmaster EDU

Saturday, February 25th, 2012

Video Search – Webmaster EDU

From the webpage:

In order to deliver search results, Google crawls the web and collects information about each piece of content. Often the best results are online videos and Google wants to help users find the most useful videos. Every day, millions of people find videos on Google search and we want them to be able to find your relevant video content.

Google is supporting markup for videos, along with alternate ways to make sure Google can index your videos.

It is a fairly coarse start but beats no information about your videos at all.
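
By way of illustration, one of the mechanisms Google documents for telling it about your videos is a video sitemap. A minimal one can be generated along these lines; the URLs and titles below are hypothetical, and a real entry supports more tags than shown here:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
VIDEO_NS = "http://www.google.com/schemas/sitemap-video/1.1"

def video_sitemap(entries) -> str:
    """Render a minimal video sitemap.  `entries` is a list of dicts
    with page_url, title, description, thumbnail and content_url keys."""
    ET.register_namespace("", SITEMAP_NS)
    ET.register_namespace("video", VIDEO_NS)
    urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
    for e in entries:
        url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
        ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = e["page_url"]
        vid = ET.SubElement(url, f"{{{VIDEO_NS}}}video")
        ET.SubElement(vid, f"{{{VIDEO_NS}}}thumbnail_loc").text = e["thumbnail"]
        ET.SubElement(vid, f"{{{VIDEO_NS}}}title").text = e["title"]
        ET.SubElement(vid, f"{{{VIDEO_NS}}}description").text = e["description"]
        ET.SubElement(vid, f"{{{VIDEO_NS}}}content_loc").text = e["content_url"]
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = video_sitemap([{
    "page_url": "http://example.com/videos/intro",
    "title": "Intro to Topic Maps",
    "description": "A short introduction.",
    "thumbnail": "http://example.com/thumbs/intro.jpg",
    "content_url": "http://example.com/media/intro.mp4",
}])
```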

Videos that can be easily found are more likely to be incorporated in topic maps (and other finding aids).

Closing the Knowledge Gap:.. (Lessons for TMs?)

Friday, December 30th, 2011

Closing the Knowledge Gap: A Case Study – How Cisco Unlocks Communications by Tony Frazier, Director of Product Management, Cisco Systems and David Fishman, Marketing, Lucid Imagination.

From the post:

Cisco Systems set out to build a system that takes the search for knowledge beyond documents into the content of social networks inside the enterprise. The resulting Cisco Pulse platform was built to deliver corporate employees a better understanding of who’s communicating with whom, how, and about what. Working with Lucid Imagination, Cisco turned to open source — specifically, Solr/Lucene technology — as the foundation of the search architecture.

Cisco’s approach to this project centered on vocabulary-based tagging and search. Every organization has the ability to define keywords for their personalized library. Cisco Pulse then tags a user’s activity, content and behavior in electronic communications to match the vocabulary, presenting valuable information that simplifies and accelerates knowledge sharing across an organization. Vocabulary-based tagging makes unlocking the relevant content of electronic communications safe and efficient.

You need to read the entire article but two things to note:

  • No uniform vocabulary: Every “organization” created its own.
  • Automatic tagging: Content was automatically tagged (read: users did not tag)
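
A toy sketch of vocabulary-based automatic tagging, my own illustration rather than anything Cisco has published, might look like:

```python
import re

def tag_message(text, vocabulary):
    """Tag a message against an organization-defined vocabulary.
    Returns the vocabulary terms found in the text; users never tag
    anything by hand."""
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    return sorted(term for term in vocabulary if term.lower() in words)

# Each organization defines its own vocabulary; no uniform vocabulary.
vocab = {"Solr", "Lucene", "routing", "firewall"}
tags = tag_message("Ping me about the Solr upgrade and firewall rules.", vocab)
# tags -> ['Solr', 'firewall']
```

A production system would also tag metadata (sender, recipients, channel), which is where the association-building discussed below comes in.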

The article doesn’t go into any real depth about the tagging but it is implied that who created the content and other information is getting “tagged” as well.

I read that to mean, in a topic maps context, that with the declaration of a vocabulary and automatic tagging, another process could create associations with roles and role players and other topic map constructs without bothering end users about those tasks.

Not to mention that declaring equivalents between tags as part of the reading/discovery process might be limited to some but not all users.

An incremental or perhaps even evolving authoring of a topic map.

Rather than a dead-tree resource delivered as a fait accompli, a topic map can change as new information or new views of existing/new information are added to the map. (A topic map doesn’t have to be so useful. It can be the equivalent of a dead-tree resource if you really want.)

National Archives Digitization Tools Now on GitHub

Saturday, October 22nd, 2011

National Archives Digitization Tools Now on GitHub

From the post:

As part of our open government initiatives, the National Archives has begun to share applications developed in-house on GitHub, a social coding platform. GitHub is a service used by software developers to share and collaborate on software development projects and many open source development projects.

Over the last year and a half, our Digitization Services Branch has developed a number of software applications to facilitate digitization workflows. These applications have significantly increased our productivity and improved the accuracy and completeness of our digitization work.

We shared our experiences with these applications with colleagues at other institutions such as the Library of Congress and the Smithsonian Institution, and they expressed interest in trying these applications within their own digitization workflows. We have made two digitization applications, “File Analyzer and Metadata Harvester” and “Video Frame Analyzer” available on GitHub, and they are now available for use by other institutions and the public.

I suspect many government departments (U.S. and otherwise) have similar digitization workflow efforts underway. Perhaps greater publicity about these efforts will cause other departments to step forward.