Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

August 3, 2018

Browser-based GDB frontend: gdbgui [With cameo by Thomas Hobbes]

Filed under: .Net,Cybersecurity,gdb,Hacking,Programming,Reverse Engineering — Patrick Durusau @ 8:26 pm

Browser-based GDB frontend: gdbgui

From the post:

A modern, browser-based frontend to gdb (gnu debugger). Add breakpoints, view stack traces, and more in C, C++, Go, and Rust! Simply run gdbgui from the terminal and a new tab will open in your browser.

Features:

  • Debug a different program in each tab (new gdb instance is spawned for each tab)
  • Set/remove breakpoints
  • View stack, threads
  • Switch frame on stack, switch between threads
  • Intuitively explore local variables when paused
  • Hover over variables in source code to view contents
  • Evaluate arbitrary expressions and plot their values over time
  • Explore an interactive tree view of your data structures
  • Jump back into the program’s state to continue debugging unexpected faults (e.g., SEGFAULT)
  • Inspect memory in hex/character form
  • View all registers
  • Dropdown of files used to compile binary, with autocomplete functionality
  • Source code explorer with ability to jump to line
  • Show assembly next to source code, highlighting current instruction. Can also step through instructions.
  • Assembly is displayed if source code cannot be found
  • Notifications when new gdbgui updates are available

While cybersecurity is always relative, the more skills you have, the more secure you can be relative to other users. Or, as Thomas Hobbes observed in De Cive (revised edition, printed in 1760 at Amsterdam), bellum omnium contra omnes, “the war of all against all.” (The quote is found on pages 25-26 of that edition.)

Look to your own security. It is always less valuable to others.

Red Team Tips

Filed under: .Net,Cybersecurity,Hacking,Security — Patrick Durusau @ 2:11 pm

Red Team Tips by Vincent Yiu.

Overview:

The following “red team tips” were posted by myself, Vincent Yiu (@vysecurity), on Twitter over about a year. This is still ongoing, but I took the opportunity to publish these in one consolidated location on my blog. These will be updated occasionally, but will not be bleeding-edge updates. To receive my “red team tips”, thoughts, and ideas behind cyber attack simulations, follow my Twitter account @vysecurity.

For the full Tweet and thread context (a lot of my followers will comment and give their insights also), visit Twitter.

Collection of three hundred and twenty-nine (329) red team (is there another kind?) tips!

Great way to start the weekend!

Enjoy!

August 2, 2018

Visual Guide to Data Joins – Leigh Tami

Filed under: .Net,Data Aggregation,Data Integration,Data Science,Joins — Patrick Durusau @ 7:06 pm

Leigh Tami created a graphic involving a person and a coat to explain data set joins.

Scaling it down won’t do it justice here so see the original.

Preview any data science book with this image in mind. If it doesn’t match or exceed this explanation of joins, pass it by.
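
If you want the same intuition in code, here is a minimal LINQ sketch (hypothetical Person/Coat data, modern C#) showing the difference between an inner join and a left outer join:

    using System;
    using System.Linq;

    class JoinDemo
    {
        record Person(int Id, string Name);
        record Coat(int OwnerId, string Color);

        static void Main()
        {
            var people = new[] { new Person(1, "Ann"), new Person(2, "Bob") };
            var coats  = new[] { new Coat(1, "red") };

            // Inner join: only people who actually have a coat.
            var inner = people.Join(coats, p => p.Id, c => c.OwnerId,
                                    (p, c) => $"{p.Name} wears a {c.Color} coat");

            // Left outer join: every person, with or without a coat.
            var left = people
                .GroupJoin(coats, p => p.Id, c => c.OwnerId, (p, cs) => new { p, cs })
                .SelectMany(x => x.cs.DefaultIfEmpty(),
                            (x, c) => $"{x.p.Name}: {c?.Color ?? "no coat"}");

            foreach (var line in inner.Concat(left)) Console.WriteLine(line);
        }
    }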

Leaking 4Julian – Non-Sysadmin Leaking

Filed under: .Net,Journalism,Leaks,News,Reporting — Patrick Durusau @ 6:15 pm

Non-sysadmins read username: 4julian password: $etJulianFree!2Day and wish they could open corporate or government archives up to mining.

Don’t despair! Even non-sysadmins can participate in the Assange Data Tsunami, worldwide leaking of data in the event of the arrest of Julian Assange.

Check out the Whistle Blower FAQ – International Consortium of Investigative Journalists (ICIJ) by Gerard Ryle.

FYI, by some unspecified criteria, the ICIJ decides which individuals and groups mentioned in a leak merit public exposure and which do not. This is a universal practice among journalists. Avoiding it requires non-journalist outlets.

The ICIJ does a great job with leaks, but if I were going to deprive a government or corporation of power over information, why would I empower journalists to make the same tell/don’t tell decision? Let the public decide what to make of all the information. Assisted by the efforts of journalists, but not with some information being known only to the journalists.

From the FAQ:

‘What information should I include?’ and other frequently asked questions about becoming a whistleblower

In my 30-year career as a journalist, I’ve spoken with thousands of potential sources, some of them with interesting tips or insider knowledge, others with massive datasets to share. Conversations often start with questions about the basics of whistleblowing. If you’re thinking about leaking information, here are some of the things you should keep in mind:

Q. What is a whistleblower?

A whistleblower is someone who has evidence of wrongdoing, abuse of power, fraud or misconduct and who shares it with a third party such as an investigative journalism organization like the International Consortium of Investigative Journalists.

By blowing the whistle you can help prevent the possible escalation of misconduct or corruption.

Edward Snowden is one of the world’s best-known whistleblowers.

Q. Can a whistleblower remain anonymous?

Yes. We will always go out of our way to protect whistleblowers. You can remain anonymous for as long as you want, and, in fact, this is sometimes the best protection that journalists can offer whistleblowers.

Q. What information should I include?

To enable a thorough investigation, you should include a detailed description of the issue you are concerned about. Ideally, you should also include documents or data. The more information you provide, the better the work the journalists can do.

I need to write something up on “raw leaking,” that is, leaking without using a journalist. Look for that early next week!

eXist-db 5.0.0 RC 3 [Prepping for Assange Data Tsunami]

Filed under: .Net,eXist,XML,XML Database,XQuery — Patrick Durusau @ 10:40 am

eXist-db 5.0.0 RC 3

One new feature and several bug fixes over RC 2, but I thought I should mention it for Assange Data Tsunami preppers.

I have deliberately avoided contact with any such preppers but you can read my advice at: username: 4julian password: $etJulianFree!2Day.

The gist is that sysadmins should, with appropriate cautions, create accounts with “username: 4julian password: $etJulianFree!2Day,” in the event that Julian Assange is taken into custody (a likely event).

If one truth teller (no WikiLeaks release has ever been proven false or modified) can disturb the world, a tsunami of secret, classified, restricted, and proprietary data may shock it to its senses.

Start prepping for the Assange Data Tsunami today!

PS: Yes, there are a variety of social media events, broadcasts, etc. being planned. Wish them all well but governments respond to bleeding more than pleading. In this case, bleeding data seems appropriate.

November 4, 2015

It’s Official! Hell Has Frozen Over!

Filed under: .Net,Microsoft,OpenShift,Red Hat — Patrick Durusau @ 1:23 pm

Microsoft and Red Hat to deliver new standard for enterprise cloud experiences

From the news release:

Microsoft Corp. (Nasdaq “MSFT”) and Red Hat Inc. (NYSE: RHT) on Wednesday announced a partnership that will help customers embrace hybrid cloud computing by providing greater choice and flexibility deploying Red Hat solutions on Microsoft Azure. As a key component of today’s announcement, Microsoft is offering Red Hat Enterprise Linux as the preferred choice for enterprise Linux workloads on Microsoft Azure. In addition, Microsoft and Red Hat are also working together to address common enterprise, ISV and developer needs for building, deploying and managing applications on Red Hat software across private and public clouds.

I can’t report on the webcast because it requires Flash 10 and I don’t have that on a VM at the moment. Good cyber hygiene counsels against running even “patched” Adobe Flash.

The news release has the key points anyway:


Red Hat solutions available natively to Microsoft Azure customers. In the coming weeks, Microsoft Azure will become a Red Hat Certified Cloud and Service Provider, enabling customers to run their Red Hat Enterprise Linux applications and workloads on Microsoft Azure. Red Hat Cloud Access subscribers will be able to bring their own virtual machine images to run in Microsoft Azure. Microsoft Azure customers can also take advantage of the full value of Red Hat’s application platform, including Red Hat JBoss Enterprise Application Platform, Red Hat JBoss Web Server, Red Hat Gluster Storage and OpenShift, Red Hat’s platform-as-a-service offering. In the coming months, Microsoft and Red Hat plan to provide Red Hat On-Demand — “pay-as-you-go” Red Hat Enterprise Linux images available in the Azure Marketplace, supported by Red Hat.

Integrated enterprise-grade support spanning hybrid environments. Customers will be offered cross-platform, cross-company support spanning the Microsoft and Red Hat offerings in an integrated way, unlike any previous partnership in the public cloud. By co-locating support teams on the same premises, the experience will be simple and seamless, at cloud speed.

Unified workload management across hybrid cloud deployments. Red Hat CloudForms will interoperate with Microsoft Azure and Microsoft System Center Virtual Machine Manager, offering Red Hat CloudForms customers the ability to manage Red Hat Enterprise Linux on both Hyper-V and Microsoft Azure. Support for managing Azure workloads from Red Hat CloudForms is expected to be added in the next few months, extending the existing System Center capabilities for managing Red Hat Enterprise Linux.

Collaboration on .NET for a new generation of application development capabilities. Expanding on the preview of .NET on Linux announced by Microsoft in April, developers will have access to .NET technologies across Red Hat offerings, including Red Hat OpenShift and Red Hat Enterprise Linux, jointly backed by Microsoft and Red Hat. Red Hat Enterprise Linux will be the primary development and reference operating system for .NET Core on Linux.

More details at: The Official Microsoft Blog and the Red Hat Blog.

I first saw this in The Power of Open Source… Microsoft .NET and OpenShift by Chris Morgan.

A small pebble in an ocean of influences and motivations, but treating Microsoft fairly during the ISO process for ISO 29500 (I am the editor of the competing ISO 26300) wasn’t a bad idea.

December 17, 2014

Orleans Goes Open Source

Filed under: .Net,Actor-Based,Cloud Computing,HyTime,Microsoft,Open Source — Patrick Durusau @ 7:03 pm

Orleans Goes Open Source

From the post:

Since the release of the Project “Orleans” Public Preview at //build/ 2014 we have received a lot of positive feedback from the community. We took your suggestions and fixed a number of issues that you reported in the Refresh release in September.

Now we decided to take the next logical step, and do the thing many of you have been asking for – to open-source “Orleans”. The preparation work has already commenced, and we expect to be ready in early 2015. The code will be released by Microsoft Research under an MIT license and published on GitHub. We hope this will enable direct contribution by the community to the project. We thought we would share the decision to open-source “Orleans” ahead of the actual availability of the code, so that you can plan accordingly.

The real excitement for me comes from a post just below this announcement: A Framework for Cloud Computing.


To avoid these complexities, we built the Orleans programming model and runtime, which raises the level of the actor abstraction. Orleans targets developers who are not distributed system experts, although our expert customers have found it attractive too. It is actor-based, but differs from existing actor-based platforms by treating actors as virtual entities, not as physical ones. First, an Orleans actor always exists, virtually. It cannot be explicitly created or destroyed. Its existence transcends the lifetime of any of its in-memory instantiations, and thus transcends the lifetime of any particular server. Second, Orleans actors are automatically instantiated: if there is no in-memory instance of an actor, a message sent to the actor causes a new instance to be created on an available server. An unused actor instance is automatically reclaimed as part of runtime resource management. An actor never fails: if a server S crashes, the next message sent to an actor A that was running on S causes Orleans to automatically re-instantiate A on another server, eliminating the need for applications to supervise and explicitly re-create failed actors. Third, the location of the actor instance is transparent to the application code, which greatly simplifies programming. And fourth, Orleans can automatically create multiple instances of the same stateless actor, seamlessly scaling out hot actors.

Overall, Orleans gives developers a virtual “actor space” that, analogous to virtual memory, allows them to invoke any actor in the system, whether or not it is present in memory. Virtualization relies on indirection that maps from virtual actors to their physical instantiations that are currently running. This level of indirection provides the runtime with the opportunity to solve many hard distributed systems problems that must otherwise be addressed by the developer, such as actor placement and load balancing, deactivation of unused actors, and actor recovery after server failures, which are notoriously difficult for them to get right. Thus, the virtual actor approach significantly simplifies the programming model while allowing the runtime to balance load and recover from failures transparently. (emphasis added)
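
To make the virtual actor idea concrete, here is a minimal grain sketch in the shape of the Orleans API (the preview API may differ in detail; the names here are illustrative):

    using System.Threading.Tasks;
    using Orleans;

    // The grain interface is the "virtual actor" contract.
    public interface IGreeterGrain : IGrainWithIntegerKey
    {
        Task<string> SayHello(string name);
    }

    // The implementation. Instances are activated, placed, and reclaimed
    // by the Orleans runtime; application code never news one up.
    public class GreeterGrain : Grain, IGreeterGrain
    {
        public Task<string> SayHello(string name)
        {
            return Task.FromResult("Hello, " + name + "!");
        }
    }

    // Caller side: no explicit creation, the reference always "exists".
    //   var greeter = client.GetGrain<IGreeterGrain>(0);
    //   var reply = await greeter.SayHello("world");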

Not in a distributed computing context, but the “look and it’s there” model is something I recall from HyTime. So nice to see good ideas resurface!

Just imagine doing that with topic maps, including having properties of a topic, should you choose to look for them. If you don’t need a topic, why carry the overhead around? Wait for someone to ask for it.

This week alone, Microsoft continued its fight for users and announced an open source project that will make me at least read about .Net. ;-) I think Microsoft merits a lot of kudos and good wishes for the holiday season!

I first saw this at: Microsoft open sources cloud framework that powers Halo by Jonathan Vanian.

December 2, 2014

BrightstarDB 1.8.0 Released

Filed under: .Net,BrightstarDB — Patrick Durusau @ 9:30 am

BrightstarDB 1.8.0 Released

From the post:

I am pleased to announce release 1.8.0 of BrightstarDB is now available from all the usual places:

This update fixes some bugs and addresses performance issues reported by the community. Thanks to all those who took the trouble to report and to provide patches / suggested workarounds.

Key new features in this release are:

  • EntityFramework now supports GUID properties.
  • EntityFramework now has an [Ignore] attribute which can be used to decorate interface properties that are not to be implemented by the generated EF class.
  • Added a constructor option to generated EF entity classes that allows property initialisation in the constructor.
  • Added some basic logging support for Android and iOS PCL builds.
  • It is now possible to iterate the distinct predicates of a data object using the GetPropertyTypes method.
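
A rough sketch of how the first two features read in an entity interface, assuming BrightstarDB’s documented [Entity], [Identifier] and [Ignore] attributes:

    using System;
    using BrightstarDB.EntityFramework;

    [Entity]
    public interface IPerson
    {
        [Identifier]
        string Id { get; }

        string Name { get; set; }

        // New in 1.8.0: GUID properties are supported.
        Guid ExternalKey { get; set; }

        // New in 1.8.0: [Ignore] keeps this property off the
        // generated EF class (implement it yourself in a partial class).
        [Ignore]
        string DisplayLabel { get; }
    }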

Significant fixes in this release are:

  • Fix for Polaris crash when attempting to process a query containing a syntax error.
  • Fixed NuGet packaging to remove an obsolete reference to Windows Phone 8. WP8 (and 8.1) are still both supported but as PCL profiles.
  • Performance fix for full cache scenarios.

The store format remains compatible with previous releases. This is a recommended update for all BrightstarDB users.

Docker Image Now Available

With this release we are now also providing a Docker image to run a BrightstarDB server in a Docker container. This makes it really easy to get a BrightstarDB service up and running on a cloud VM infrastructure such as Azure or AWS. The docker image is available on Docker Hub. For more information please read our notes in the BrightstarDB/Docker repository on GitHub, where you will also find the Dockerfile and configuration files used to build the image.

If you don’t know BrightstarDB:

Why BrightstarDB?

BrightstarDB is a unique and powerful data storage technology for the .NET platform. It combines flexibility, scalability and performance while allowing applications to be created using tools developers are familiar with.

An Associative Model

All databases adopt some fundamental world view about how data is stored. Relational databases use tables, and document stores use documents. BrightstarDB has adopted a very flexible, associative data model based on the W3C RDF data model. (From: Why BrightstarDB?)

If you still don’t recognize BrightstarDB, perhaps the names Kal Ahmed and Graham Moore will ring a bell.

Still nothing?

I guess you will just have to read the documentation and play with BrightstarDB!

Enjoy!

February 24, 2014

[Browsing] the .Net Reference Source

Filed under: .Net,Microsoft — Patrick Durusau @ 5:08 pm

How to browse the .NET Reference Source by Immo Landwerth.

A roughly 2.5 minute introduction to browsing the .Net Reference Source.

When you see the user experience, I think you are going to be way under-impressed.

Much better than what they had, but whether it is up to par for today is a different question.

Imbuing source code with semantics and enabling browsing/searching on the basis of those semantics would produce much more attractive results.

Preview the beta release at: http://referencesource-beta.microsoft.com/

How would you improve the source code?

Even minor comments have the potential to impact 90+% of the operating systems in existence.

Enjoy!

October 11, 2013

RaptorDB – the Document Store

Filed under: .Net,Database,RaptorDB — Patrick Durusau @ 3:51 pm

RaptorDB – the Document Store by Mehdi Gholam.

From the post:

This article is the natural progression from my previous article about a persisted dictionary to a full blown NoSql document store database. While a key/value store is useful, it’s not as useful to everybody as a "real" database with "columns" and "tables". RaptorDB uses the following articles:

Some advanced R&D (for more than a year) went into RaptorDB, in regard to the hybrid bitmap index. Similar technology is used by Microsoft’s PowerPivot for Excel and by FastBit, a US Department of Energy Berkeley Lab project that tracks terabytes of information from particle simulations. Only the geeks among us care about this stuff; the normal person just prefers to sit in the Bugatti Veyron and drive, instead of marveling at the technological underpinnings.

To get here was quite a journey for me, as I had to create a lot of technology from scratch. Hopefully RaptorDB will be a prominent alternative, built on the .net platform, to other document databases which are either Java or C++ based.

RaptorDB puts the joy back into programming, as you can see in the sample application section.

If you want to take a deep dive into a .net project, this may be the one for you.

The use of FastBit, developed at the US Department of Energy’s Berkeley Lab, is what caught my attention.

A project using DOE-developed software merits a long pause.

Latest version is dated October 10, 2013.

May 17, 2013

Hadoop SDK and Tutorials for Microsoft .NET Developers

Filed under: .Net,Hadoop,MapReduce,Microsoft — Patrick Durusau @ 3:39 pm

Hadoop SDK and Tutorials for Microsoft .NET Developers by Marc Holmes.

From the post:

Microsoft has begun to treat its developer community to a number of Hadoop-y releases related to its HDInsight (Hadoop in the cloud) service, and it’s worth rounding up the material. It’s all Alpha and Preview so YMMV but looks like fun:

  • Microsoft .NET SDK for Hadoop. This kit provides .NET API access to aspects of HDInsight including HDFS, HCatalog, Oozie and Ambari, and also some PowerShell scripts for cluster management. There are also libraries for MapReduce and LINQ to Hive. The latter is really interesting as it builds on the established technology for .NET developers to access most data sources to deliver the capabilities of the de facto standard for Hadoop data query.
  • HDInsight Labs Preview. Up on Github, there is a series of 5 labs covering C#, JavaScript and F# coding for MapReduce jobs, using Hive, and then bringing that data into Excel. It also covers some Mahout use to build a recommendation engine.
  • Microsoft Hive ODBC Driver. The examples above use this preview driver to enable the connection from Hive to Excel.

If all of the above excites you, our Hadoop on Windows for Developers training course also covers similar content in a lot of depth.
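
To give a flavor of the MapReduce side of the SDK, here is a word-count sketch based on the preview API shape of Microsoft.Hadoop.MapReduce (alpha software, so exact signatures may drift):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Hadoop.MapReduce;

    public class WordCountMapper : MapperBase
    {
        public override void Map(string inputLine, MapperContext context)
        {
            // Emit each word with a count of 1.
            foreach (var word in inputLine.Split(' '))
                context.EmitKeyValue(word.ToLowerInvariant(), "1");
        }
    }

    public class WordCountReducer : ReducerCombinerBase
    {
        public override void Reduce(string key, IEnumerable<string> values,
                                    ReducerCombinerContext context)
        {
            // Sum the 1s emitted for each word.
            context.EmitKeyValue(key, values.Count().ToString());
        }
    }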

Hadoop is coming to an office/data center near you.

Will you be ready?

February 15, 2013

How to set up Semantic Logging…

Filed under: .Net,Log Analysis,Semantics — Patrick Durusau @ 10:55 am

How to set up Semantic Logging: part one with Logstash, Kibana, ElasticSearch and Puppet, by Henrik Feldt.

While we are on the topic of semantic logging:

Logging today is mostly done in an unstructured way; each application developer has his own syntax for the logs, optimized for his personal requirements, and when it is time to deploy, ops consider themselves lucky if there is even some logging in the application, and even luckier if that logging can be used to find problems as they occur, by being able to adjust verbosity where needed.

I’ve come to the point where I want a really awesome piece of logging from the get-go – something I can pick up and install in a couple of minutes when I come to a new customer site without proper operations support.

I want to be able to search, drill down into, and filter out patterns, and have good tooling that allows me to let logging be an obvious support as the application is brought through its life cycle, from development to production. And I don’t want to write my own log parsers, thank you very much!

That’s where semantic logging comes in – my applications should be broadcasting log data in a manner that allows code to route, filter and index it. That’s why I’ve spent a lot of time researching how logging is done in a bloody good manner – this post and upcoming ones will teach you how to make your logs talk!

It’s worth noting that you can read this post no matter your programming language. In fact, the tooling that I’m about to discuss spans multiple operating systems (Linux, Windows) and multiple programming languages: Erlang, Java, Puppet, Ruby, PHP, JavaScript and C#. I will demo logging from C#/Win initially and continue with Python, Haskell and Scala in upcoming posts.

I didn’t see any posts following this one. But it is complete enough to get you started on semantic logging.

Embracing Semantic Logging

Filed under: .Net,Log Analysis,Semantics — Patrick Durusau @ 10:49 am

Embracing Semantic Logging by Grigori Melnik.

From the post:

In the world of software engineering, every system needs to log. Logging helps to diagnose and troubleshoot problems with your system both in development and in production. This requires proper, well-designed instrumentation. All too often, however, developers instrument their code to do logging without having a clear strategy and without thinking through the ways the logs are going to be consumed, parsed, and interpreted. Valuable contextual information about events frequently gets lost, or is buried inside the log messages. Furthermore, in some cases logging is done simply for the sake of logging, more like a checkmark on the list. This situation is analogous to people fallaciously believing their backup system is properly implemented by enabling the backup but never, actually, trying to restore from those backups.

This lack of a thought-through logging strategy results in systems producing huge amounts of log data which is less useful or entirely useless for problem resolution.

Many logging frameworks exist today (including our own Logging Application Block and log4net). In a nutshell, they provide high-level APIs to help with formatting log messages, grouping (by means of categories or hierarchies) and writing them to various destinations. They provide you with an entry point – some sort of a logger object through which you call log writing methods (conceptually, not very different from Console.WriteLine(message)). While supporting dynamic reconfiguration of certain knobs, they require the developer to decide upfront on the template of the logging message itself. Even when this can be changed, the message is usually intertwined with the application code, including metadata about the entry such as the severity and entry id.

As ever in all discussions, even those of semantics, there is some impedance:

Imagine another world, where the events get logged and their semantic meaning is preserved. You don’t lose any fidelity in your data. Welcome to the world of semantic logging. Note, some people refer to semantic logging as “structured logging”, “strongly-typed logging” or “schematized logging”.

Whatever you want to call it:

The technology to enable semantic logging in Windows has been around for a while (since Windows 2000). It’s called ETW – Event Tracing for Windows. It is a fast, scalable logging mechanism built into the Windows operating system itself. As Vance Morrison explains, “it is powerful because of three reasons:

  1. The operating system comes pre-wired with a bunch of useful events
  2. It can capture stack traces along with the event, which is INCREDIBLY USEFUL.
  3. It is extensible, which means that you can add your own information that is relevant to your code.”

ETW has been improved in .NET Framework 4.5 but I will leave you to Grigori’s post to ferret out those details.
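
The .NET 4.5 improvement turns on the EventSource class, which lets you define strongly-typed events that flow through ETW. A minimal sketch (the event source and its payload are invented for illustration):

    using System.Diagnostics.Tracing;

    // Each method is a semantic event: its name, fields and types survive
    // all the way to the consumer, no string parsing required.
    public sealed class OrderEventSource : EventSource
    {
        public static readonly OrderEventSource Log = new OrderEventSource();

        [Event(1, Level = EventLevel.Informational)]
        public void OrderPlaced(int orderId, string customer)
        {
            WriteEvent(1, orderId, customer);
        }

        [Event(2, Level = EventLevel.Error)]
        public void OrderFailed(int orderId, string reason)
        {
            WriteEvent(2, orderId, reason);
        }
    }

    // Usage:
    //   OrderEventSource.Log.OrderPlaced(42, "ACME");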

Semantic logging is important for all the reasons mentioned in Grigori’s post and because captured semantics provide grist for semantic mapping mills.

August 26, 2012

Index your blog using tags and lucene.net

Filed under: .Net,Lucene — Patrick Durusau @ 4:56 am

Index your blog using tags and lucene.net by Ricci Gian Maria.

From the post:

In the last part of my series on Lucene I showed how simple it is to add tags to documents for simple tag-based categorization; now it is time to explain how you can automate this process and how to use some advanced characteristics of Lucene. First of all I wrote a specialized analyzer called TagSnowballAnalyzer, based on the standard SnowballAnalyzer plus a series of keywords associated with various tags; here is how I construct it.

There is various code around the net on how to add synonyms with weight, as described in this stackoverflow question; the standard Java Lucene codebase has a SynonymTokenFilter, but this example shows how simple it is to write a Filter to add tags as synonyms of related words. First of all the filter is initialized with a dictionary of keywords and Tags, where Tag is a simple helper class that stores the tag string and its relative weight; it also has a ConvertToToken() method that returns the tag enclosed by the | (pipe) character. The pipe character is used to explicitly mark tags in the token stream; any word enclosed by pipes is by convention a tag.
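
In rough outline, the filter approach looks like this (a sketch against the Lucene.Net 3.x attribute API; the original TagSnowballAnalyzer also carries per-tag weights, omitted here):

    using System.Collections.Generic;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Tokenattributes;

    // Injects a tag token at the same position as its trigger word
    // (position increment 0), so tags behave like synonyms.
    public class TagSynonymFilter : TokenFilter
    {
        private readonly IDictionary<string, string> _wordToTag;
        private readonly ITermAttribute _term;
        private readonly IPositionIncrementAttribute _posIncr;
        private string _pendingTag;

        public TagSynonymFilter(TokenStream input, IDictionary<string, string> wordToTag)
            : base(input)
        {
            _wordToTag = wordToTag;
            _term = AddAttribute<ITermAttribute>();
            _posIncr = AddAttribute<IPositionIncrementAttribute>();
        }

        public override bool IncrementToken()
        {
            if (_pendingTag != null)
            {
                // Emit the tag "inside" the previous word's position.
                _term.SetTermBuffer(_pendingTag);
                _posIncr.PositionIncrement = 0;
                _pendingTag = null;
                return true;
            }
            if (!input.IncrementToken()) return false;

            string tag;
            if (_wordToTag.TryGetValue(_term.Term, out tag))
                _pendingTag = "|" + tag + "|"; // pipe-enclosed, per the post's convention
            return true;
        }
    }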

Not the answer for every situation involving synonymy (as in “same subject,” i.e., topic maps) but certainly a useful one.

August 17, 2012

Lucene.Net becomes top-level project at Apache

Filed under: .Net,C#,Lucene — Patrick Durusau @ 2:41 pm

Lucene.Net becomes top-level project at Apache

From the post:

Lucene.Net, the port of the Lucene search engine library to C# and .NET, has left the Apache incubator and is now a top-level project. The announcement on the project’s blog says that the Apache board voted unanimously to accept the graduation resolution. The vote confirms that Lucene.Net is healthy and that the development and governance of the project follows the tenets of the “Apache way”. The developers will now be moving the project’s resources from the current incubator site to the main apache.org site.

Various flavors of MS Windows account for 80% of all operating systems.

What is the target for your next topic map app? (With or without Lucene.Net.)

August 14, 2012

Mono integrates Entity Framework

Filed under: .Net,ADO.Net Entity Framework,C#,ORM — Patrick Durusau @ 10:42 am

Mono integrates Entity Framework

From the post:

The fourth preview release of version 2.11 of Mono, the open source implementation of Microsoft’s C# and .NET platform, is now available. Version 2.11.3 integrates Microsoft’s ADO.NET Entity Framework which was released as open source, under the Apache 2.0 licence, at the end of July. The Entity Framework is the company’s object-relational mapper (ORM) for the .NET Framework. This latest alpha version of Mono 2.11 has also been updated in order to match async support in .NET 4.5.

Just in case you are not familiar with the MS ADO.Net Entity Framework:

The ADO.NET Entity Framework enables developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:

  • Applications can work in terms of a more application-centric conceptual model, including types with inheritance, complex members, and relationships.
  • Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
  • Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
  • Developers can work with a consistent application object model that can be mapped to various storage schemas, possibly implemented in different database management systems.
  • Multiple conceptual models can be mapped to a single storage schema.
  • Language-integrated query (LINQ) support provides compile-time syntax validation for queries against a conceptual model.
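
The “program against a conceptual model” point in one small Code First sketch (hypothetical Customer model):

    using System.Data.Entity;
    using System.Linq;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class StoreContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    class Program
    {
        static void Main()
        {
            using (var db = new StoreContext())
            {
                // LINQ against the conceptual model; the provider maps this
                // to whatever storage schema is configured, and the query is
                // syntax-checked at compile time.
                var names = db.Customers
                              .Where(c => c.Name.StartsWith("A"))
                              .Select(c => c.Name)
                              .ToList();
            }
        }
    }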

Does the source code at Entity Framework at CodePlex need extension to:

  • Discover when multiple conceptual models are mapped against a single storage schema?
  • Discover when parts of conceptual models vary in name only? (to avoid duplication of models)
  • Compare/contrast types with inheritance, complex members, and relationships?

If those sound like topic map type questions, they are.

There are always going to be subjects that need mappings to work with newer systems or different understandings of old ones.

Let’s stop pretending we’re going to reach the promised land and keep our compasses close at hand.

June 16, 2012

SharePoint Module 3.2 HotFix 3 Now Available [Javascript bug]

Filed under: .Net,SharePoint — Patrick Durusau @ 3:19 pm

SharePoint Module 3.2 HotFix 3 Now Available

From the post:

A new hotfix package is available for version 3.2 of the TMCore SharePoint Module.

Systems Affected

This hotfix should be applied to any installation of the TMCore SharePoint Module 3.2 downloaded before 15th June 2012. If you downloaded your copy of the software from our site on or after this date, the hotfix is included in the package and you do not need to apply it again.

To determine if your system is affected, check the File Version property of the assembly NetworkedPlanet.SharePoint in the GAC (browse to C:\Windows\ASSEMBLY, locate the NetworkedPlanet.SharePoint assembly, right-click and choose Properties. The File Version can be found on the Version tab above Description and Copyright). This hotfix updates the File Version of the NetworkedPlanet.SharePoint assembly to 2.2.3.0 – if the file version shown is greater than or equal to 2.2.3.0, then you do not need to apply this hotfix.
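
If you would rather check from code than through Explorer, here is a small sketch using the standard FileVersionInfo API (the assembly path below is illustrative; adjust it to your GAC layout):

    using System;
    using System.Diagnostics;

    class HotfixCheck
    {
        static void Main()
        {
            var path = @"C:\Windows\ASSEMBLY\...\NetworkedPlanet.SharePoint.dll";
            var current = new Version(FileVersionInfo.GetVersionInfo(path).FileVersion);
            Console.WriteLine(current >= new Version("2.2.3.0")
                ? "Hotfix already applied."
                : "Hotfix needed.");
        }
    }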

The change log reports:

BUGFIX: Hierarchy topic selector was experiencing a javascript error when topic names contained apostrophes

May 26, 2012

Starcounter To Be Fastest ACID Adherent NewSQL Database

Filed under: .Net,NewSQL,Starcounter — Patrick Durusau @ 6:15 pm

Starcounter To Be Fastest ACID Adherent NewSQL Database by Sudheer Vatsavaya.

Starcounter said last week that its newly launched in-memory database can process millions of transactions per second on a single server. The database is built on its patent-pending VMDBMS technology, which combines a virtual machine (VM) and a database management system (DBMS) to process data at the required volumes and speeds.

The company claims Starcounter is more than 100 times faster than traditional databases and 10 times faster than high-performance databases, making the new in-memory database ideal for highly transactional, large-scale and real-time applications. It can handle millions of users, integrate with applications to increase performance, and guarantee consistency by processing millions of ACID-compliant database transactions per second while managing up to a terabyte of updatable data on a single server.

One thing that clearly comes out in the design and ambition of the company is the belief that the way ahead is not SQL or NoSQL but NewSQL, which adheres to ACID properties while still scaling to today’s data needs. This cannot be achieved with either of the former types of databases: SQL databases cannot scale up to the need, and NoSQL databases are built around the CAP theorem, which says that one of the three properties (consistency, availability, or partition tolerance) has to be compromised.

Sounds interesting but runs on .Net.

I will have to rely on reports from others.

April 22, 2012

DensoDB Is Out

Filed under: .Net,C# — Patrick Durusau @ 7:07 pm

DensoDB Is Out

From the website:

DensoDB is a new NoSQL document database, written for the .Net environment in the C# language.

It’s simple, fast and reliable. More details on GitHub: https://github.com/teamdev/DensoDB

You can use it in three different ways:

  1. InProcess: No need of service installation and communication protocol. The fastest way to use it. You have direct access to the DataBase memory and you can manipulate objects and data in a very fast way.
  2. As a Service: Installed as a Windows Service, it can be used as a network document store. You can use a REST service or WCF service to access it. It’s no different from the previous usage mode, but you have a networking protocol in between, so it’s not as fast as the previous one.
  3. On a Mesh: Mixing the previous two usage modes with a P2P mesh network, it can be easily synchronized with other mesh nodes. It gives you the power of a distributed, scalable, fast database, in a server or server-less environment.

You can use it as a database for a stand-alone application or in a mesh to share data in a social application. The P2P protocol and synchronization rules will be transparent to you, and you’ll be able to develop your application as if it were stand-alone and connected only to a local DB.

I don’t work in a .Net environment but am interested in experiences with .Net based P2P mesh networks and topic maps.

At some point I should set up a smallish Windows network with commodity boxes. Perhaps I could make all of them dual (or triple) boot so I could switch between distributed networks. If you have software or a box you would like to donate to the “cause” as it were, please be in touch.

February 26, 2012

Neo4jClient

Filed under: .Net,Neo4j,Neo4jClient — Patrick Durusau @ 8:31 pm

Neo4jClient

From the description:

A .NET client for the neo4j REST API. neo4j is an open source, Java-based transactional graph database. It’s pretty awesome.
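
A minimal connection sketch, assuming the GraphClient/Cypher fluent API that Neo4jClient settled on (the Person label and property are hypothetical):

    using System;
    using Neo4jClient;

    public class Person { public string Name { get; set; } }

    class Demo
    {
        static void Main()
        {
            var client = new GraphClient(new Uri("http://localhost:7474/db/data"));
            client.Connect();

            // Fluent Cypher: find Person nodes and materialize them as POCOs.
            var people = client.Cypher
                .Match("(p:Person)")
                .Return(p => p.As<Person>())
                .Results;
        }
    }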

Neo4j in a .Net World

Filed under: .Net,Graphs,Neo4j — Patrick Durusau @ 8:29 pm

Neo4j in a .Net World

From the description:

This month, Tatham Oddie will be coming from Australia to present at the Neo4j User Group on Neo4j with .NET, and will cover:

  • the Neo4j client we have built for .NET
  • hosting it all in Azure
  • why our queries were 200ms slower in the cloud, and how we fixed it

Tatham will present a case study, explaining:

  • what our project is
  • why we chose a graph db
  • how we modelled it to start with
  • how our first attempts at modelling were wrong
  • what we’re doing now

February 5, 2012

Neo4jD–.NET client for Neo4j Graph DB

Filed under: .Net,Neo4j,Neo4jD — Patrick Durusau @ 7:58 pm

Neo4jD–.NET client for Neo4j Graph DB

Sony Arouje writes:

Last couple of days I was working on a small light weight .NET client for Neo4j. The client framework is still in progress. This post gives some existing Api’s in Neo4jD to perform basic graph operations. In Neo4j two main entities are Nodes and Relationships. So my initial focus for the client library is to deal with Node and Relationship. The communication between client and Neo4j server is in REST Api’s and the response from the server is in json format.

Let’s go through some of the Neo4j REST Api’s and the equivalent api’s in Neo4jD, you can see more details of Neo4j RestAPi’s here.

The below table will show how to call Neo4j REST Api directly from an application and the right hand will show how to do the same operation using Neo4jD client.

Traversal support is next, and is said to be Gremlin-based at first.

If you are interested in promoting Neo4j in the .NET world, consider lending a hand.

October 23, 2011

How to create and search a Lucene.Net index…

Filed under: .Net,C#,Lucene — Patrick Durusau @ 7:21 pm

How to create and search a Lucene.Net index in 4 simple steps using C#, Step 1

From the post:

As mentioned in a previous blog, using Lucene.Net to create and search an index was quick and easy. Here I will show you in these 4 steps how to do it.

  • Create an index
  • Build the query
  • Perform the search
  • Display the results

Before we get started I wanted to mention that Lucene.Net is a port of a library originally designed for Java. Because of this, I think the creators used some class names in Lucene that already exist in the .Net framework. Therefore, we need to use the entire path to the classes and methods instead of using a directive to shorten it for us.
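
The four steps condense to something like this against the Lucene.Net 3.0 API (an in-memory sketch; a real application would use FSDirectory):

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using Lucene.Net.Store;
    using Version = Lucene.Net.Util.Version;

    class LuceneDemo
    {
        static void Main()
        {
            // Step 1: create an index with one document.
            var dir = new RAMDirectory();
            var analyzer = new StandardAnalyzer(Version.LUCENE_30);
            using (var writer = new IndexWriter(dir, analyzer,
                                                IndexWriter.MaxFieldLength.UNLIMITED))
            {
                var doc = new Document();
                doc.Add(new Field("title", "Lucene.Net in four steps",
                                  Field.Store.YES, Field.Index.ANALYZED));
                writer.AddDocument(doc);
            }

            // Step 2: build the query.
            var parser = new QueryParser(Version.LUCENE_30, "title", analyzer);
            var query = parser.Parse("lucene.net");

            // Step 3: perform the search.
            using (var searcher = new IndexSearcher(dir))
            {
                var hits = searcher.Search(query, 10);

                // Step 4: display the results.
                foreach (var scoreDoc in hits.ScoreDocs)
                    System.Console.WriteLine(searcher.Doc(scoreDoc.Doc).Get("title"));
            }
        }
    }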

Useful for anyone exploring topic maps as a native to MS Windows application.

March 7, 2011

Nhibernate Search Tutorial with Lucene.Net and NHibernate 3.0

Filed under: .Net,Hibernate,Lucene,NHibernate — Patrick Durusau @ 7:11 am

Nhibernate Search Tutorial with Lucene.Net and NHibernate 3.0

From the website:

Here’s another quickstart tutorial on NHibernate Search for NHibernate 3.0 using Lucene.Net. We’re going to be using Fluent NHibernate for NHibernate but attributes for NHibernate Search.

Uses Nhibernate:

NHibernate is a mature, open source object-relational mapper for the .NET framework. It’s actively developed, fully featured and used in thousands of successful projects.
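
The attribute half of that pairing looks roughly like this (NHibernate.Search attributes mirror Hibernate Search; treat the names as indicative):

    using NHibernate.Search.Attributes;

    [Indexed]
    public class Book
    {
        [DocumentId]
        public virtual int Id { get; set; }

        [Field(Index.Tokenized, Store = Store.Yes)]
        public virtual string Title { get; set; }

        [Field(Index.Tokenized)]
        public virtual string Description { get; set; }
    }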

For those of you who are more comfortable in a .Net environment.

February 24, 2011

Machine Learning for .Net

Filed under: .Net,Machine Learning — Patrick Durusau @ 8:06 pm

Machine Learning for .Net

From the webpage:

This library is designed to assist in the use of common Machine Learning Algorithms in conjunction with the .NET platform. It is designed to include the most popular supervised and unsupervised learning algorithms while minimizing the friction involved with creating the predictive models.

Supervised Learning

Supervised learning is an approach in machine learning where the system is provided with labeled examples of a problem and the computer creates a model to predict future unlabeled examples. These classifiers are further divided into the following sets:

  • Binary Classification – Predicting a Yes/No type value
  • Multi-Class Classification – Predicting a value from a finite set (i.e. {A, B, C, D } or {1, 2, 3, 4})
  • Regression – Predicting a continuous value (i.e. a number)

Unsupervised Learning

Unsupervised learning is an approach which involves learning about the shape of unlabeled data. This library currently contains:

  1. KMeans – Performs automatic grouping of data into K groups (specified a priori)

    Labeling data is the same as for the supervised learning algorithms with the exception that these algorithms ignore the [Label] attribute:

    var kmeans = new KMeans();
    var grouping = kmeans.Generate(ListOfStudents, 2);

    Here the KMeans algorithm is grouping the ListOfStudents into two groups returning an array corresponding to the appropriate group for each student (in this case group 0 or group 1)

  2. Hierarchical Clustering – In progress!
  3. Planning

    Currently planning/hoping to do the following:

    1. Boosting/Bagging
    2. Hierarchical Clustering
    3. Naïve Bayes Classifier
    4. Collaborative filtering algorithms (suggest a product, friend etc.)
    5. Latent Semantic Analysis (for better searching of text etc.)
    6. Support Vector Machines (more powerful classifier)
    7. Principal Component Analysis – Aids in dimensionality reduction which should allow/facilitate learning from images
    8. *Maybe* – Common AI algorithms such as A*, Beam Search, Minimax etc.
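
To make the [Label] point concrete, here is a sketch of a labeled training type; the Student class and its properties are hypothetical, and only KMeans.Generate and the [Label] attribute come from the quoted text:

    // Hypothetical training type: properties act as features, and the
    // [Label] attribute marks the value supervised learners predict.
    // KMeans, being unsupervised, ignores the label.
    public class Student
    {
        public double Gpa { get; set; }
        public double Attendance { get; set; }

        [Label]
        public bool Passed { get; set; }
    }

    // Unsupervised grouping, as in the quoted snippet:
    //   var kmeans = new KMeans();
    //   var grouping = kmeans.Generate(ListOfStudents, 2);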

So, if you are working in a .Net context, here is a chance to get in on the ground floor of a machine learning project.

February 17, 2011

Encog Java and DotNet Neural Network Framework

Filed under: .Net,Encog,Java,Machine Learning,Neural Networks,Silverlight — Patrick Durusau @ 6:56 am

Encog Java and DotNet Neural Network Framework

From the website:

Encog is an advanced neural network and machine learning framework. Encog contains classes to create a wide variety of networks, as well as support classes to normalize and process data for these neural networks. Encog trains using multithreaded resilient propagation. Encog can also make use of a GPU to further speed processing time. A GUI based workbench is also provided to help model and train neural networks. Encog has been in active development since 2008.

Encog is available for Java, .Net and Silverlight.
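
For a taste of the C# side, here is the classic XOR network in Encog 3.x (a minimal sketch; namespaces follow the Encog 3 layout):

    using Encog.Engine.Network.Activation;
    using Encog.ML.Data.Basic;
    using Encog.Neural.Networks;
    using Encog.Neural.Networks.Layers;
    using Encog.Neural.Networks.Training.Propagation.Resilient;

    class XorDemo
    {
        static void Main()
        {
            // 2 inputs, one hidden layer of 3 sigmoid neurons, 1 output.
            var network = new BasicNetwork();
            network.AddLayer(new BasicLayer(null, true, 2));
            network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
            network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
            network.Structure.FinalizeStructure();
            network.Reset();

            double[][] input = { new[] { 0.0, 0.0 }, new[] { 1.0, 0.0 },
                                 new[] { 0.0, 1.0 }, new[] { 1.0, 1.0 } };
            double[][] ideal = { new[] { 0.0 }, new[] { 1.0 },
                                 new[] { 1.0 }, new[] { 0.0 } };
            var trainingSet = new BasicMLDataSet(input, ideal);

            // Multithreaded resilient propagation, as the blurb describes.
            var train = new ResilientPropagation(network, trainingSet);
            do { train.Iteration(); } while (train.Error > 0.01);
        }
    }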

An important project for at least two reasons.

First, the obvious applicability to the creation of topic maps using machine learning techniques.

Second, it demonstrates that supporting Java, .Net and Silverlight, isn’t, you know, all that weird.

The world is changing and becoming somewhat more interoperable.

Topic maps has a role to play in that process, both in terms of semantic interoperability of the infrastructure as well as the data it contains.
