Archive for the ‘Radio’ Category

Radio Show Host Manual

Saturday, December 17th, 2016

Host manual for the Software Engineering Radio

The manual to read if you want to do a show for Software Engineering Radio, and quite possibly the manual for any radio show.

Why?

Consider the numbers (page 7, although engineers haven’t figured out pagination yet):

  • is in its 11th year, with over 270 episodes;
  • is published three times monthly by IEEE Software magazine;
  • is downloaded in aggregate 180,000 times or more per month (including current and back catalog), with each show reaching 30,000–40,000 downloads within three months;
  • was named the #1-rated developer podcast based on an aggregation of Hacker News comments;
  • appeared in The Simple Programmer’s ultimate list of developer podcasts;
  • was included among 11 podcasts that will make you a better software engineer;
  • is highly rated on iTunes “Top Podcasts” under the category Software: How To;
  • features thought leaders in the field (Eric Evans, David Heinemeier Hansson, Kent Beck, the Gang of Four, Rich Hickey, Michael Nygard, James Turnbull, Michael Stonebraker, Adrian Cockcroft, Martin Fowler, Martin Odersky, Eric Brewer, …);
  • reaches, according to a demographic survey we did a few years ago, an audience of mostly software engineers with 5–10 years of experience, architects, and technical managers.

The manual itself runs to twenty-eight pages of information and suggestions.

    Instead of trolling internet censors and their suggestions, create high quality content. (Advice to myself as much as anyone else.)

    RTLSDR-Airband v2 released [Tooling Up for 2016, the Year of Surveillance]

    Tuesday, December 29th, 2015

    RTLSDR-Airband v2 released

    From the post:

    Back in June of 2014 we posted about the release of a new program called RTLSDR-Airband. RTLSDR-Airband is a Windows and Linux compatible command line tool that allows you to simultaneously monitor multiple AM channels per dongle within the same chunk of bandwidth. It is great for monitoring aircraft voice communications and can be used to feed websites like liveatc.net.

    Since our post the development of the software has been taken over by a new developer, szpajder, who wrote in to let us know that he has now updated RTLSDR-Airband to version 2.0.0. The new version improves performance and support for small embedded platforms such as the Raspberry Pi 2, but the Windows port is now not actively maintained and probably does not work.

    Depending on your surveillance needs, the RTLSDR-Airband v2 + hardware should be on your list.
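    The core DSP behind multi-channel AM monitoring is simple: each channel's audio is the envelope of its complex baseband (I/Q) signal. Here is a minimal illustrative sketch in NumPy — not RTLSDR-Airband's actual code, and the function name and parameters are my own:

```python
# Illustrative AM demodulation from complex I/Q samples, the basic
# operation a tool like RTLSDR-Airband performs per channel.
# This is a sketch, not the project's implementation.
import numpy as np

def am_demodulate(iq):
    """Envelope detection: AM audio is the magnitude of the complex
    baseband signal, with the DC carrier component removed."""
    envelope = np.abs(iq)
    return envelope - envelope.mean()

# Synthetic check: a 1 kHz tone AM-modulated, with a small frequency
# offset standing in for an untuned carrier.
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
iq = (1.0 + 0.5 * tone) * np.exp(2j * np.pi * 100 * t)
audio = am_demodulate(iq)  # recovers 0.5 * tone
```

    Note that the envelope is unaffected by the carrier's phase rotation, which is why AM is so forgiving of slight mistuning.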

    Governments around the world are continuing at a breakneck pace to eliminate privacy on both large and small scales.

    Citizens must demonstrate to governments that fishbowl environments are more troubling to the governing than the governed.

    The data vacuum of the NSA can suck up the Internet backbone indefinitely. But, dedicated citizens can collect relevant data, untroubled by fraud, waste, inefficiency, and sheer incompetence.

    Think of it as the difference between carpet bombing square mile after square mile versus a single sniper round. The former is a military-industrial complex response, the latter is available to all players.

    As I have mentioned before, there are far more citizen-observers than government agents.

    Make their “see something, say something” mantra your own.

    See government activity, report government activity to other citizens.

    I have no idea what victory will look like versus a surveillance state. But being a passive goldfish is a sure recipe for defeat.

    Sora high performance software radio is now open source

    Saturday, July 25th, 2015

    Sora high performance software radio is now open source by Jane Ma.

    From the post:

    Microsoft researchers today announced that their high-performance software radio project is now open sourced through GitHub. The goal for Microsoft Research Software Radio (Sora) is to develop the most advanced software radio possible, capable of implementing the latest wireless communication technology easily and efficiently.

    "We believe that a fully open source Sora will better support the research community on more scientific innovation," said Kun Tan, a senior researcher on the software radio project team.

    Conventionally, the critical lower-layer processing in wireless communication systems, i.e., the physical layer (PHY) and medium access control (MAC), is typically implemented in hardware (ASIC chips), due to high computational and real-time requirements. However, designing ASICs is very costly and inflexible, since ASIC chips are fixed: once delivered, they cannot be changed or upgraded. The lack of flexibility and programmability makes experimental research in wireless communication very difficult. Software Radio (or SDR), on the contrary, proposes implementing all these low-level PHY and MAC processes in software, which is practical for development, debugging and updating. The challenge, however, is how the software can keep up with hardware in terms of performance.

    See also: Microsoft's Wireless and Networking research group

    Sora was developed to solve this significant challenge. Sora is a fully programmable high-performance software radio that is capable of implementing state-of-the-art wireless technologies (Wi-Fi, LTE, MIMO, etc.). Sora is based on software running on a low-cost, commodity multi-core PC with a general purpose OS, i.e., Windows. A multi-core PC, plugged into a PCIe radio control board and connected to a third-party radio front-end with an antenna, becomes a powerful software radio platform. The PC interface board transfers the raw wireless (I/Q) signals between the RF front-end and the PC memory through fast DMA. All signals are processed in the software running in the PC.
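    Because all processing happens on the PC, tasks that conventional radios bake into silicon — such as pulling one narrowband channel out of the raw wideband I/Q stream delivered over DMA — become ordinary array code. A hedged sketch in generic NumPy (Sora itself is a C++/Windows platform; this is only the idea, not its API):

```python
# Illustrative software channelization: mix a target channel down to
# 0 Hz, then crudely low-pass filter and decimate by block-averaging.
# Generic NumPy sketch, not Sora's actual implementation.
import numpy as np

def extract_channel(iq, fs, f_offset, decim):
    """Shift the channel at f_offset (Hz) to baseband, then average
    blocks of `decim` samples as a crude low-pass + decimation."""
    n = np.arange(len(iq))
    shifted = iq * np.exp(-2j * np.pi * f_offset * n / fs)
    usable = len(shifted) // decim * decim
    return shifted[:usable].reshape(-1, decim).mean(axis=1)

# A pure tone at +200 kHz in a 2.4 MS/s capture should land at DC
# (a constant 1+0j) after extraction.
fs = 2_400_000
n = np.arange(24_000)
iq = np.exp(2j * np.pi * 200_000 * n / fs)
baseband = extract_channel(iq, fs, f_offset=200_000, decim=48)
```

    A real PHY would use a proper FIR filter bank rather than block averaging, but the structure — mix, filter, decimate, all in software — is the point.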

    An avalanche of wireless signals will accompany the Internet of Things (IoT). Intercepting all of them with custom hardware would be prohibitively expensive.

    Thanks to Microsoft, you can skip the custom hardware step.

    Remember: the question is not if someone is listening, but who.

    Shining a light into the BBC Radio archives

    Monday, December 15th, 2014

    Shining a light into the BBC Radio archives by Yves Raimond, Matt Hynes, and Rob Cooper.

    From the post:


    One of the biggest challenges for the BBC Archive is how to open up our enormous collection of radio programmes. As we’ve been broadcasting since 1922 we’ve got an archive of almost 100 years of audio recordings, representing a unique cultural and historical resource.

    But the big problem is how to make it searchable. Many of the programmes have little or no meta-data, and the whole collection is far too large to process through human efforts alone.

    Help is at hand. Over the last five years or so, technologies such as automated speech recognition, speaker identification and automated tagging have reached a level of accuracy where we can start to get impressive results for the right type of audio. By automatically analysing sound files and making informed decisions about the content and speakers, these tools can effectively help to fill in the missing gaps in our archive’s meta-data.

    The Kiwi set of speech processing algorithms

    COMMA is built on a set of speech processing algorithms called Kiwi. Back in 2011, BBC R&D were given access to a very large speech radio archive, the BBC World Service archive, which at the time had very little meta-data. In order to build our prototype around this archive we developed a number of speech processing algorithms, reusing open-source building blocks where possible. We then built the following workflow out of these algorithms:

    • Speaker segmentation, identification and gender detection (using the LIUM diarization toolkit, diarize-jruby and ruby-lsh). This process is also known as diarisation. Essentially an audio file is automatically divided into segments according to the identity of the speaker. The algorithm can show us who is speaking and at what point in the sound clip.
    • Speech-to-text for the detected speech segments (using CMU Sphinx). At this point the spoken audio is translated as accurately as possible into readable text. This algorithm uses models built from a wide range of BBC data.
    • Automated tagging with DBpedia identifiers. DBpedia is a large database holding structured data extracted from Wikipedia. The automatic tagging process creates the searchable meta-data that ultimately allows us to access the archives much more easily. This process uses a tool we developed called ‘Mango’.

    …

    COMMA is due to launch some time in April 2015. If you’d like to be kept informed of our progress you can sign up for occasional email updates here. We’re also looking for early adopters to test the platform, so please contact us if you’re a cultural institution, media company or business that has a large audio data-set you want to make searchable.

    This article was written by Yves Raimond (lead engineer, BBC R&D), Matt Hynes (senior software engineer, BBC R&D) and Rob Cooper (development producer, BBC R&D).

    I don’t have a large audio data-set but I am certainly going to be following this project. The results should be useful in and of themselves, to say nothing of being a good starting point for further tagging. I wonder if the BBC Sanskrit broadcasts are going to be available? I will have to check on that.
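    Structurally, the three-stage Kiwi workflow is a straightforward pipeline over speaker segments: diarise first, then transcribe and tag each segment. A hypothetical skeleton — every stage below is a stub I invented for illustration; the real pipeline uses LIUM (diarisation), CMU Sphinx (speech-to-text) and the BBC's 'Mango' tool (DBpedia tagging):

```python
# Hypothetical skeleton of the COMMA workflow. All stage functions are
# stubs standing in for LIUM, CMU Sphinx, and Mango respectively.
from dataclasses import dataclass, field

@dataclass
class Segment:
    speaker: str
    start: float          # seconds into the recording
    end: float
    text: str = ""
    tags: list = field(default_factory=list)

def diarise(audio):
    """Stub: split audio into per-speaker segments (who spoke when)."""
    return [Segment("S0", 0.0, 5.0), Segment("S1", 5.0, 9.0)]

def transcribe(segment, audio):
    """Stub: speech-to-text for one segment."""
    return f"transcript of {segment.speaker}"

def tag(text):
    """Stub: link entity mentions to DBpedia identifiers."""
    return ["http://dbpedia.org/resource/BBC"] if "S0" in text else []

def process(audio):
    segments = diarise(audio)
    for seg in segments:
        seg.text = transcribe(seg, audio)
        seg.tags = tag(seg.text)
    return segments

result = process(audio=None)
```

    The payoff of this shape is that the searchable meta-data (speaker, time range, transcript, entity tags) accumulates on each segment, so the archive index falls out of the pipeline for free.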

    Without diminishing the achievements of other institutions, the efforts of the BBC, the British Library, and the British Museum are truly remarkable.

    I first saw this in a tweet by Mike Jones.