
Thursday, June 27, 2013

How are Publishers Rewarded for Exposing Linked-Data?

Disclaimer: This document poses the question asked in the title without offering anything which can be reasonably called an answer.  It is my hope that members of the relevant communities who know more than I do on the topic can provide some insight into potential answers.

Utilization of linked-data by applications is predicated upon the existence of accessible linked-data.  In much the same way that publishers were told they could put their content online in formats like HTML, we now tell them they can expose their information as linked-data using formats like RDFa and JSON-LD.  However, where the former had the fairly obvious benefit of making the publisher's content visible to human consumers, the latter seems to lack any immediately realizable end.

Lofty visions of automated agents and reasoning engines which would operate over the ever-expanding web of linked-data have been touted since around the time the phrase “Semantic Web” was coined.  The suggestion was that, by exposing their information as linked-data, publishers could “hook in” to these agents, making themselves visible to their users.  Such agents, however, have yet to materialize and, from my observation, seem to be touted less and less, which I feel is unfortunate, but that's an entirely different post.

Many “Semantic Web applications” which I have seen, either in writings online or at conferences, are in fact semantically enabled applications: they use some semantic technologies, some of which have been born of the forge of the Semantic Web, in combination with other technologies (AI, NLP, etc.) in order to build up a triple store and reason over or operate upon it.  These have been interesting applications, but they are not Semantic Web applications, as they go well beyond the boundary of utilizing exposed linked-data.  Further, they often operate in specialized domains over semantically enabled datasets and not over arbitrarily exposed information on publishers' sites.  As such, in and of themselves, such applications provide no reward to the average content publisher.

Search Engines have taken up the torch to some extent in the form of Schema.org.  This gives publishers a reason to expose their data as well as a concrete vocabulary to use in its exposition, but it positions the “Semantic Web” to be re-branded as “SEO 2.0,” which in my mind would be a loss of the initial vision.  It is, however, from what I can find, the only realizable end of publishing linked-data along with your content.

When talking about or attempting to explain the Semantic Web to friends, family, and co-workers, I often employ the Chicken or the Egg metaphor in accounting for why this concept has not yet become ubiquitous (though I am sure some would disagree with the statement that it is not ubiquitous).  If we take the Chicken to be the accessible data and the Egg to be applications, then, with the help of efforts such as Schema.org, we may be getting closer to the Chicken, which would give the Egg a raison d'être.  In my experience, the lack of a reasonable Egg to point to greatly complicates the task of encouraging publishers to expose their information as linked-data.

A final note: I would be very happy to be corrected on my observations and to be told that the Egg already exists (ideally by being pointed to such an Egg).

Saturday, April 21, 2012

Stability and Fragility of Namespaces

While working on a blog post which will soon be published (and linked to) on CITYTECH, Inc.'s site, I mentally ran across the subject of updating a namespace definition within a domain of data.  More concretely, I was considering why Apache Jackrabbit does not allow updates (or unregistrations, for that matter) to namespaces once they are established within a given Repository.  It seemed to me initially that allowing changes to namespaces would be valuable, for example, as new versions of an ontology were published.  Considering the matter further, however, I began to realize how dangerous such a practice would be.

Consider the following scenario.  Let us say that I told you that I bought a new shirt, the color of which was blue.  However, instead of saying that, I said "I bought a new shirt, tar klonor blue."  You would look quizzically at me and perhaps question your hearing of my statement, because what I didn't tell you was that I had contrived a new phrase, "tar klonor," which meant "having the color".

This example is somewhat absurd in and of itself but it is essentially what would happen to a machine's ability to understand linked-data statements if a namespace were changed in the domain of the data being represented.

Consider now a more concrete example.  Let us say that I have created a food ontology identified by the URI http://example.com/food/v1.0/.  Now let us say that I have two documents containing food information.  I present these documents in listing 1 and listing 2 respectively.

@prefix food: <http://example.com/food/v1.0/> .
@prefix ex: <http://somesite.com/things/> .


ex:americanCheese a food:Cheese .
ex:lambChop a food:Meat .
ex:apple a food:Fruit .
ex:provalone a food:Cheese .

Listing 1

@prefix food: <http://example.com/food/v1.0/> .
@prefix me: <http://self.com/items/> .


me:camembert a food:Cheese . 
me:brusselSprouts a food:Vegetable .

Listing 2

If I were to search over this dataset for all resources which are http://example.com/food/v1.0/Cheese, I would find three things.  Now, let us say that I create a new version of the ontology and identify it with the URI http://example.com/food/v2.0/, however I only update document 1 with the new namespace.  Now, if I perform the same search, I only find one thing.  I know in my heart of hearts that I meant for http://example.com/food/v1.0/Cheese to be semantically equivalent to http://example.com/food/v2.0/Cheese, however a system looking at this data has no reason to make this connection (nor should it).  It is equivalent to me creating the new phrase "tar klonor" and then assuming that you will understand the meaning of my sentences including said phrase.  One solution to the problem would be to update the second document along with the first, however this assumes that all documents and systems utilizing the URI of this ontology are under your control.  If your ontology is more widely used, this is not viable.
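
For concreteness, the search I have in mind might be expressed as a SPARQL query along these lines (a sketch; any comparable query mechanism over the combined documents would exhibit the same behavior):

SELECT ?resource
WHERE { ?resource a <http://example.com/food/v1.0/Cheese> . }

Against the original documents this returns ex:americanCheese, ex:provalone, and me:camembert; after document 1 moves to the v2.0 namespace, it returns only me:camembert.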

OWL does expose some mechanisms for handling this (see http://www.w3.org/TR/2004/REC-owl-guide-20040210/#OntologyVersioning), however these seem cumbersome and rely on the consuming system to implement an understanding of the versioning constraints.  Further, some of the more robust constraints are only available in OWL Full, the implementation and usage of which is far from trivial.  And this only covers ontology versioning.  What about specifications which are not ontologies?
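
For reference, the versioning annotations in question amount to statements roughly like the following (a sketch in Turtle using the standard OWL versioning properties; the ontology URIs are from the example above):

@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.com/food/v2.0/> a owl:Ontology ;
    owl:versionInfo "2.0" ;
    owl:priorVersion <http://example.com/food/v1.0/> ;
    owl:backwardCompatibleWith <http://example.com/food/v1.0/> .

Note that these statements only declare the relationship between the two versions; a consumer still has to know to look for them and act on them before a query against v1.0 terms will find data expressed in v2.0 terms.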

Some time ago, a version 1.0 of Dublin Core existed and there was talk of creating a version 2.0 after version 1.1 (some old notes on this and on translations of DC).  Imagine if you already had your data published in DC 1.0 when 1.1 was pushed out.  The change to version 1.1 updated the URI of the specification and, as such, made your data obsolete for all intents and purposes.  Given this, it's clear why the RDF URI still has "1999" in it.  Also, on some specification sites (such as FOAF's) you will find statements concerning the stability of the specification URI, specifically, that it is not going to change.

Coming to the end of this rather long-winded discussion, I suppose the bottom line is this: Jackrabbit does not need to support changes to namespaces, because namespaces shouldn't change.  Updating a namespace in your domain of data is equivalent to updating all nodes of data using that namespace, which should not be taken lightly.


DnL8Tar
-PCM

Friday, January 20, 2012

Automatic Shopping List - A Use Case for Linked Data

For some time now I've wanted shopping lists automatically generated from recipes.  In fact, I suspect there are sites which will perform this action on a single-recipe basis, though I don't have the patience to search for them now.  From a single recipe it is fairly trivial to generate a shopping list; one could simply print the recipe, as there is normally an ingredients list included.  Working this way, however, one would need to go shopping every single time they wanted to cook something, or one would need to print a number of recipes and reconcile the overlap in the lists manually.


Consider then an application which would take n recipes and aggregate the ingredients into a shopping list.  Conceptually this is of value in situations where one is disciplined enough to plan their meals for the whole week. In a family setting I imagine the value is increased as you can plan meals for the whole family for a period of time and make sure you are minimizing the trips to the grocery store.  


There is, however, a question of how the application would receive information about the recipes for which it is generating a list.  This is where open linked data comes in.  If recipe providers (Food Network, All Chefs, etc.) were to expose their recipe data as linked data, it could be collected into a single system and generally reasoned upon, presuming it followed or was coerced into a somewhat standard ontology (or set thereof).  A user would enter a URL for a recipe into the application, indicating when they planned to prepare the dish.  After entering a number of recipes, the user would elect to generate a shopping list encompassing a certain time period and the system would generate the list based on all of the recipes at once.
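
As a sketch of what such exposed recipe data might look like, in Turtle and using a hypothetical recipe vocabulary standing in for whatever shared ontology the providers settled on:

@prefix rec: <http://example.com/recipe/> .
@prefix site: <http://recipes.example.org/> .

site:beefStew a rec:Recipe ;
    rec:ingredient [ rec:item "stew beef" ; rec:quantity "2 lbs" ] ,
                   [ rec:item "Worcestershire sauce" ; rec:quantity "2 tbsp" ] .

Given a handful of documents of this form, aggregating the shopping list is a matter of collecting every rec:ingredient across the selected recipes and reconciling duplicate items.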


One can imagine a number of optimizations to the results, but one which comes to mind, and is most often made manifest in my personal life, is a reconciliation of the shopping list with the contents of the pantry.  Last weekend, I was preparing beef stew.  Knowing that I would need Worcestershire sauce, I picked up a bottle, not remembering if I already had some.  When I arrived at home I found an unopened bottle already sitting in my cupboard.  Had I known this while I was at the store, I could have avoided the expenditure.  Similarly, if the system which I am endeavoring to contrive with this post had access to exposed data concerning the user's larder, it could adjust the list, making sure it included only items which the user would need to buy to supplement their current stock.
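
Again as a sketch, with a hypothetical pantry vocabulary alongside the hypothetical recipe vocabulary above, the reconciliation might reduce to a query along these lines (SPARQL 1.1's MINUS removes anything already on hand):

PREFIX rec: <http://example.com/recipe/>
PREFIX pantry: <http://example.com/pantry/>

SELECT DISTINCT ?item
WHERE {
  ?recipe rec:ingredient ?ingredient .
  ?ingredient rec:item ?item .
  MINUS { ?stocked pantry:item ?item . }
}

The interesting part is not the query itself but that the pantry data and the recipe data can come from entirely different sources and still be joined on the item, which is the point of exposing both as linked data.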


Considering the concept in complete reverse, the system could also suggest recipes based on what you currently have "in stock."  This feature may be more useful than those previously described depending on your lifestyle.


DnL8Tar
-PCM

Sunday, May 15, 2011

On Data Not Accessible in an Expected Tuple Format - A Continuation of the Magic Hat Discussion

In my prior post I brought up the concept of using a "Magic Hat" mechanism to obtain resources regardless of the "physical" location of the resources.  An assumption which this mechanism makes is that those resources which are to be retrieved can be retrieved in a standardized format, in this case, data tuples.  There are however many systems housing interesting and useful data which do not serve this data in such a format directly.  

What is to be done in the situation where access to data contained in such a system through the Magic Hat is desirable depends largely on what mechanisms the system affords for acquiring its data in general.  If the system provides no API via which to pull the data but does expose the ability to edit the template code with which the data is rendered, RDFa could be added to the template code in order to add semantics to the rendered data, making it more accessible to a tuple store.  This approach is quite limited, however, as, while it facilitates the pulling of a single resource through the Magic Hat, it does little to ease the asking of a question about all of the data contained in the system.  To elaborate, consider the request "provide all data created by [A] which concerns the topic [B] and was written after the date [C]."  Such a request would be hard to satisfy in such a system as we are largely limited to considering a single resource at a time.
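
As a sketch of the template approach, the change amounts to decorating markup that is already being rendered with RDFa attributes; something like the following, where the resource URI and the vocabularies and property names are placeholders rather than anything prescribed (prefix declarations omitted):

<div about="http://example.com/posts/42" typeof="sioc:Post">
  <h1 property="dc:title">Some Post Title</h1>
  <span property="dc:creator">Some Author</span>
</div>

Each rendered page then carries a handful of triples about the resource it displays, which is exactly the single-resource granularity described above.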
 
The provision of an API has become quite standard in online systems, however, and any newer system or application which does not expose one is most likely a) in beta with an API on the way, or b) not worth using.  How requests are to be made to the API and the format of data returned by the API are left to the designer of the API and are not guaranteed to match the request and data formats which the Magic Hat expects.  As such, some coercion on both the request and response ends is necessary.  I've taken this approach with the retrieval of Blogger data and will speak further on it in my next post.  For now, suffice it to say that such a mechanism allows for much more robust requests but is limited by the complexity of coercion necessary to make more and more complex requests.

An approach which I have seen / heard taken by some applications is to retrieve all data of interest from a system and house it locally in a tuple format.  While this has the upside of allowing for robust data requests and combinations using expected request and response formats, it does have the significant overhead of maintaining consistency between the local system and the remote system which is still the owner of the data.

DnL8Tar
-PCM

Friday, March 25, 2011

On Information and Trust

Within a Semantic Web application (I suppose it does not have to be a Semantic Web application, however that's the context I have been working in and thus is most immediate to this thought) you, as a user, may be presented with and using data from multiple sources.  Since the data is all semantically linked it does not necessarily need to reside in a single store but can be obtained from another authority on the subject of the resource in question.


A question this brought up for me was one of trust.  Trust specifically concerning the validity of the information retrieved from a data source.  Since anyone can post information about a subject given a canonical URI, the "truthfulness" of the data is questionable at best.  For instance, presuming the URI http://example.com/people/ChuckNorris was intended to reference the Chuck Norris, I could, on my website, post a tuple such as


<http://example.com/people/ChuckNorris> foaf:nick "Prissy Pants Magee" .


which would establish that Chuck Norris has at least one nickname and that nickname is "Prissy Pants Magee." 


Perhaps, because I was not thinking clearly, this was a concern of mine for some time.  However I eventually came to the realization that, simply moving towards a Semantic Web does not remove the onus of understanding the sources of data from the user.  As has always been the case, the consumer of information must know that the information is coming from a source that the consumer trusts.  A user trusting one newspaper and not another will be analogous to a user trusting one data source concerning Chuck Norris facts and not another.  And similarly, any tuple I may post on my website will be used only insofar as I (or my website) am (is) regarded as an authority on the subject matter.


DnL8Tar
-PCM

Tuesday, September 14, 2010

On Imagination and Experience

I posit the following query as a mental exercise.
Consider, if you will, a person, completely blind from birth, never having seen light of any type, knowing only sightlessness.  If asked, how would you describe color to this person?
I will reserve my own answer, for it is irrelevant to the point overall; however, consider yours.  Indeed it would be a difficult task, but why?  Color is a concept familiar to us.  We see color every day and use color to differentiate one thing from another, or one type from another.  Further, we can know of colors, their form, absent of their application to matter.  Color may be used in describing a thing known, or in conceiving a thing to be known.  Why then should so familiar a thing be so difficult to portray?

The answer lies in the blind person's lack of experience of color, or of sight at all.  Indeed, color is perceived through the sense of sight and we only have experience of color through this sense.  The blind person, however, has no experience with this sense, no basis on which to know color.  And try as we may, we can only convey an abstraction of color; we cannot impart a knowledge of color itself.

Our knowledge is built of our collective experiences, our perception of those experiences, and our rumination thereon.  We can give ourselves more knowledge, or deeper knowledge, by building upon that gained by our experience, but we cannot give ourselves knowledge without experience.  That is to say, we cannot give ourselves knowledge from nothing.

What then is left to say of imagination?  For we do certainly contrive those things which do not exist and know them in our mind, and can give to others information about them so that they may know them as well.  But these things which are the objects of our imagination are not brought out of nothing; they too are brought of our own experience.  As an example, consider a dragon, a creature which, in this world, does not exist, but is brought forth from our imagination and made manifest to others.  Even this creature is not brought forth from nothing, and we can consider its components and their familiarity to our experience (skin like a reptile, facial structure like a horse, wings like a bird, etc.).  The objects of our imagination, however far they are from reality, are never so far that they do not extend from our experience.


DnL8Tar
-PCM

Thursday, September 2, 2010

On Knowledge and Information

http://informationr.net/ir/8-1/paper144.html


A decent paper focusing on the misuse (and overall meaninglessness) of the phrase "Knowledge Management."  This has given me reason for pause in my own use of the phrase and after some consideration I believe it may be best to cease using the phrase altogether.  


In the interest of definition:
Knowledge is that which is known to an individual, as gained by the individual's experience and as colored by the individual's perception.  Any attempt to "transfer Knowledge" from one individual to another (as the phrase is used popularly) entails first the transformation of Knowledge into Information via some medium (i.e., writing, speaking, etc.) and second the consumption of the Information and its interpretation, resulting in Knowledge.  It is important to note here, however, that the Knowledge transformed into Information is not, and cannot be, the same Knowledge gained by the consumer of the Information.  It can be similar, and certainly if the Information is clear and the consumer is up to the task of consumption, it will be, but as it is imputed to the individual it will not be the same in two individuals.


DnL8Tar
-PCM

Saturday, July 24, 2010

A Brief Consideration of Continuity and its Implications on Computing

Consider a computer, stripped of the layers of abstraction which expose its facilities to users.  At the core of a computer is a binary processor acting on logical operations.  Each such operation is executed at a set increment, controlled by the computer's clock speed.  These small operations are combined together through layers of abstraction to produce the robust functionality we have come to expect from such a machine.

On such a system, one with proper knowledge could produce an animation of, let's say, a ball rolling across a table.  During the animation the ball traverses the table, rolling from a point we shall call a to a second point which we shall call b.  Once the animation is produced, the user can watch a playback of it and, assuming it was well produced, would not be surprised to see what they would naturally expect to see if a ball were indeed rolling across a table.

Further, consider a true ball in the physical world rolling across a true table from the same point a to the same point b.  Along the path between these points, as many points as you please may be observed  as well, and never can we observe so many points in between that there are not more which can be observed.  This continuity of movement, and of being, is innate to objects in the natural world.  The ball from our example moves continuously along the path from a to b, through the infinity of measurable (measurable either arithmetically or geometrically) points over an amount of time also exhibiting continuity.

In comparison, the animation which is produced consists of a series of frames displayed at a set rate.  Each frame can be said to be a measurement of the position of our virtual ball at a specific point in time.  From this animation select two adjacent frames.  In between these frames could be inserted a third, showing the position of the ball at a point in time halfway between the point represented in the first frame and the point represented in the second frame.  Again, selecting this new frame and its adjacent frame, another new frame could be made to show the point in time representative of the midpoint of these two frames.  Such a process could be repeated as many times as we please without reaching a final result.
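
Put slightly more formally, if two adjacent frames sit at times $t_i$ and $t_{i+1}$, the inserted frame sits at

$$t_{\text{new}} = \frac{t_i + t_{i+1}}{2},$$

and after $n$ rounds of subdivision the spacing between neighboring frames is $(t_{i+1} - t_i)/2^n$, which shrinks without bound but remains positive for every finite $n$.  The discrete sequence of frames never exhausts the points through which the real ball passes.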

This is representative of a large limitation of a binary representation of continuity.  While binary operations are able to perform logical processes with accuracy, the representation of continuity is limited to an approximation.  The producer of our ball and table animation is forced to select the points in time which will be shown in the frames of the animation and then calculate where the ball would be at each such point.  While this sort of approximation is more than suitable for the domain of animation (as the eye can be tricked into seeing continuity), it is less desirable for a true simulation where the interaction of events across a continuous flow of time is to be observed, as opposed to the state of events at a discrete point in time.

DnL8Tar
-PCM

Monday, July 12, 2010

RDF as a Database Solution

With "linked data" taking a front seat in my research efforts over the last few years, my use of RDF has increased in state from "I have no idea what RDF is" to "RDF is the backbone of my system architecture."  I have a tenancy to go over board with such things, especially when learning a new technology or technique.  Case and point, Recursive-Living.  The JavaScript framework I created and build Recursive-Living on top of was made solely to deepen my knowledge of AJAX style programming and it's various pit falls.  This being the case, the entire site is delivered via AJAX and JavaScript.  While this is admittedly overkill, such a rigorous approach to the development of conceptual knowledge yielded many fruits, most detailing why such an architecture is ineffectual in practice. 


Two web applications which I have spoken of before, if not in this forum then on Buzz, Transaction-Tracker and Comic-Post, I have built wholly on an RDF backend, again for the sake of deepening my knowledge of RDF, RDFa, and OWL.  The intention for both of these applications was, in part, the study and extrapolation of the needs of a suitable framework upon which such applications could be built, the very same framework of which I have spoken previously.  Delving deeper into this intention is fodder for a future post which I might write.  During development of the aforementioned applications I have compiled a good deal of information around the conceptual use of such systems.


There are many distinct advantages to using an RDF database, or an RDF abstraction over a database schema, as a data store.  These advantages are considered in comparison to more traditional data store architectures, where tables are designed around the data and relationships between the tables are conceptualized on smooth white boards spanning long walls in stark rooms.  I list below the advantages which I have formalized to date.

  • The tables with which the developer need be concerned are limited to the RDF implementation.  If new types of data are added to an existing application, or if existing types of data are augmented or altered in any way, the tables themselves need not change, only the data in the tables.
  • Relationships between resources are concretely defined.  This is as opposed to the more abstract definition of relationships between tables which often occurs based on key columns.  Resources' relationships are defined by linking their URIs together via a property or attribute.  Since these definitions are concrete they can be easily traversed without the need for explicit codification of each relationship.  Such an implementation allows for processes such as a recursive resource lookup, where a resource, and all its related resources, and all resources related to those resources, etc., are found, built, and returned to the user.
  • RDFa output is made trivial if your backend is already in RDF.  While not particularly important at the moment, with more web applications being built around this concept, it will soon become crucial in promoting a site and helping automated agents find and navigate said site.
  • Type checking and existential validation are simplified in this model since each URI is a resource in your data store.  If using RDF in conjunction with RDFS (and OWL, though OWL is not immediately necessary for this purpose), the type of a URI (or class of a URI) can be quickly verified via a SPARQL ASK query, of the sort sketched just below this list.  Again, since the application need only trouble itself with one set of tables, only one such ASK query need be defined.  In comparison, one would need to define a query for each table in a classic data store infrastructure in order to see if a particularly keyed row existed.
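
The ASK query in question is a one-liner; as a sketch (the URI and class here are placeholders):

PREFIX ex: <http://example.com/things/>
ASK { ex:someResource a ex:SomeClass . }

The query returns a simple boolean, and the same single query template works for any URI and class, which is what allows one generic existence/type check to cover the whole store.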

Certainly there are disadvantages as well, and one topic for future consideration which I have not yet begun to tread upon is performance comparison.  These, again, I leave as fodder for a future post.


DnL8Tar
-PCM

Wednesday, May 5, 2010

Powered By [enter name here]

I have been a long-time advocate of home-brewing (code, not beer... unless you know what you're doing, though I suppose the same could be said for both).  I refused, perhaps from a desire to learn, or perhaps from sheer arrogance, to use packages which were not included in a programming language's core.  For me, to download a library which someone else had written and use it in my own code was closer to blasphemy than my sensibilities would let me tread.  In retrospect, the distinction between those things distributed with a language via its core and those which one would need to download seems paltry, though my adherence to a rejection of the latter has led to some interesting thoughts on levels of knowledge.

Once, I heard it said, "You do not need to know how to build a car in order to drive a car."  Statistically this is sound simply based on the observable myriad of car drivers in comparison to the relatively few auto-repair workers.  The statement may also be applied to domains outside the realm of automotive engineering.  For our purposes, we will apply it to programming.  My prior personal decision to set aside external packages and, by inference, frameworks, forced me to learn how to build these packages and frameworks.  However, there was a limit to the depths to which I was willing to delve.  I did not, for instance, write my own programming language before using PHP.  Similarly, I did not write an operating system before turning on my computer.  Essentially, the distinction between functionality provided by a language's core distribution and add-on functionality established the domain, or level, of knowledge I was endeavoring to internalize through work on my personal and academic projects.

Often I've said (sometimes to others but usually to myself) that a serious Computer Scientist should not use a pre-built package or framework without having made a comparable package themselves.  The statement is too broad, however, and does not accurately reflect the concept of knowledge levels.  It would be better stated in the context of a particular domain, such as "someone who is serious about learning how content management systems work should not heavily use a content management system without building their own."  Similarly, one could say "someone who is serious about understanding how a programming language works should write their own programming language."  Creating such a system may seem a waste of time inasmuch as the system will more than likely be tossed aside at a later point in favor of a more robust and mature system of the same genre.  The experience, however, is edifying.

One should explore one's purpose before setting forth on an endeavor of this ilk.  Are you attempting to truly understand how a system, or genre of systems, works, or are you attempting to set up something that functions as expected, without any grandiose visions of future changes outside the bounds of what a particular framework provides?  If your purpose is the latter, home-brewing is admittedly overkill and, assuming a first attempt at brewing said genre of system, would result in an application of questionable stability.

This line of thinking was largely inspired by my explorations in jQuery, a JavaScript library to which I had given a wide berth until recently.  While elegant, understanding of the library, and usage thereof, is not trivial, barring a willingness to code on blind faith and the kindness of support forum members.  Statements like "functions are first-class citizens," with which the jQuery documentation is rife, hold little weight for someone who has not coded a JavaScript closure.

Thoughts?  

DnL8Tar
-PCM

Sunday, April 25, 2010

TTT Discourse

Rules of a Tic Tac Toe universe
  1. A statement is valid if it is made in order [rule 2] and if the square of the board indicated by the statement is un-owned [rule 3].
  2. A statement made by a player is considered "in order" if the prior valid statement was made by the opposite player or if the player is player 1 and their statement is the first of the game.
  3. When a player makes a valid statement, ownership of the square on the board coinciding with the statement is given to the player.
  4. If a player owns three squares all in the same row, column, or diagonal of the board, that player has "won" the current game. The opposite player has "lost".
  5. If all squares of the board are owned and neither player has won, the game is drawn. Neither player wins and neither player loses.
  6. Once a winner is established or the game is drawn, ownership of all squares on the board is revoked returning each square to a neutral state. This starts a new game or iteration/generation of the universe.


I submit for consideration a reflection on an abstraction of the game of Tic-Tac-Toe.  This abstraction considers the moves of a tic-tac-toe game as a universe of discourse.  The rules of tic-tac-toe are, along the same line, the rules of the universe, governing the reaction of the universe to each "statement" made.

Three distinct entities make up the universe: two players and a board upon which the players interact.  The players I will refer to as player 1 and player 2 when differentiation is necessary.  The board is, to those who are familiar with the game, a standard tic-tac-toe board, consisting of 9 squares laid out in a 3x3 grid.  Players are the only acting entities.  Their interaction is performed via the board using the vocabulary set forth by the universe.  For purposes of the abstraction the specific semantics of the vocabulary are inconsequential so long as a player can formulate any valid tic-tac-toe move via use of the vocabulary.  The following set is one such vocabulary: { (-1,1), (0,1), (1,1), (-1,0), (0,0), (1,0), (-1,-1), (0,-1), (1,-1) } where each element in the set represents a square on the board.  Using a standard Cartesian coordinate system, the element (-1,-1) represents a move to the bottom left square and similarly the element (1,1) represents a move to the top right square.

At any point a player may make a statement by selecting a single element from the vocabulary.  The universe then responds to the statement.  As noted the players may make any statement at any time.  The notion of "turn" or "valid move" is not instilled in the player but in the rules of the universe.  This being the case it is possible for the players to make statements which do not change the state of the board based on the rules of the universe.  For instance, assume player 1 makes the statement (1,1) and player 2 subsequently makes the same statement.  The second statement is, in a sense, rejected by the universe as the state of the board is not changed by the statement. 

Such statements I will call meaningless.  Thus the meaningfulness of a statement is determined by whether the statement results in a change of the state of the board.  If it does, then the statement is considered meaningful, otherwise it is considered meaningless.  Of course, the rules of the universe determine whether a given statement changes the state of the board.

Given this abstraction one can conceive of a learning algorithm applied to the players such that the players learn from each statement made.  In order to learn, the players must be directed toward some goal.  If they are not, then the algorithm has no bearing with which to process each move.  One obvious goal of the game would be to win.  A similar and secondary goal would be to not lose.  A third goal of some usefulness would be to make only meaningful statements.

Given these goals, coupled with well-defined rules over the universe, the players could conceivably "learn" the rules of tic-tac-toe based on their observations of the reactions of the universe to their statements and the other player's statements.  Further, the players may develop stratagems coinciding with their goals of winning and not losing (though this latter item is of lesser interest to me at the moment).


DnL8Tar
-PCM

Thursday, April 1, 2010

Resource Driven Development - A Definition

As a thirty-second Google (Topeka) search for the phrase "Resource Driven Development" did not turn up anything relevant to my purpose, and as I never allocate much more than thirty seconds to a Google search, I am going on blog-record with what I would define the phrase to mean.

Resource Driven Development refers to development done within a framework where the behavior of a system is based on resources defined within a system-specific ontology.  

This model of development is precisely what my framework hopes to deliver, abstracting presentation logic into resources and data population logic into the ontology.

DnL8Tar
-PCM

Monday, December 7, 2009

Upon Embarking

Before delving into the subject matter of which I intend to write, I would do well to make note of the purpose of this work.  This will stand for my own sake as much as the reader's, for without direction my mind is wont to wander and soon sets aside the task at hand.

As I debated with myself concerning the creation of this blog (a word of which I am not particularly fond and intend to use only as necessity dictates), my initial notion was to utilize it solely as a log of my progress in the creation of the web development framework with which I have loosely tasked myself.  I felt this subject matter would be rather dry, however, and as my comic is largely on hiatus until the fruition of said framework, or such a time as its value no longer outweighs the cost of time spent, I needed an outlet for my more satirical musings.

To that end I posit the following purposes in order of import as I see them.
  • General thoughts and notes of interest concerning my study of logic
  • Explanations of the aforementioned web development framework, along with my progress in its creation
  • Topics of more general interest, of which "Thought on Working From Home" comes readily to mind
It is my hope that, if I stray further from these topics than seems reasonable, the reader will gently correct me once, and more forcibly the second time.

DnL8Tar
-PCM