
My first constructed language: Uamua. A spoken/written language based on physics, math, logic and programming.

This post describes a language I am creating for a story I am writing.

I knew Tolkien invented a language as background for his work, and I know people have learned Klingon, but I never thought about trying to make my own language before discovering the /r/conlangs subreddit.  It's amazing that places like this exist.  We live in the future.

Uamua is currently a syllable-based language like Japanese, where words are built from syllable components.  My intention is that each syllable affects the syllables that follow it, since each relates back to the one that precedes it.  In this way it is like compound nouns in German: syllables have a general meaning, and when placed in sequence, the earlier syllables color the meaning of the ones that come after.

So you can build up words to the effect of:  movement-planar-limited-to-axis-x-z

This would describe something moving (like a car) that only moves on the X and Z axes (in 3D, XZ is the ground plane and Y is height).  This is a scientifically and mathematically oriented language.

My language, Uamua, is a "concept" language, in that I want to pack a number of ideas into it:

1. The primary idea is that everything is separated into "physical" and "logical" by the initial sound of a word.

I'm not sure yet whether this will apply just to nouns or to all words.  I'm also not sure whether my current understanding of English grammar will be very applicable to the way it turns out, which I'll describe in the next section.

If a syllable sound of "U" (ooo) is used, then it is physically based.  This means it can be measured as a physical thing; it has properties of mass, volume, heat, or other measurable properties such as the spins or other properties of quarks.  Anything that is "real" starts with a "U".

If a syllable sound of "A" (aah) is used, then it is logically based.  This means it is not measurable in the way described above; things like "ideas" or "words" are not measurable in a physical way (you can measure brain waves, but those are brain waves about words, not the "words" themselves).

The first sound of any word is either a "U" or an "A" sound, which tells the listener/reader whether this word is about a physical thing or a logical thing.  This neatly separates the things we can measure from the things we can't, which is the primary purpose of this language (and serves a philosophical purpose in the story I am writing).

"U" and "A" were chosen because I believe they are normally the first sounds babies make, and in testing my own voice, they appear to be the most natural of the vowel sounds; "ooo" and "aah" put very little tension on the vocal muscles.

In this way I see them as the primitive or fundamental sounds, which can be sustained the longest with the most efficiency.

2. The syllables themselves are meant to act as primitives of description, so that all words can be built up from them.

Each syllable is initiated with a consonant, like "mu" or "ma".

A word "uma" would start with the "U" sound, and then have a "Ma" syllable.  "U" at the start means this word is about something physical.  The consonant letter/sound "M" means "waveform" in Uamua.  A waveform is a sequence of frequencies over time: a time series of data, a series of values representing magnitude or other metrics.

As a deeper meaning, the "M" sound was chosen because it can be made continuously as a humming sound: Mmmmmmmmmmh.  It is a consonant sound that itself works as a waveform.  In the Latin character set it also looks like a waveform: MMMMMMMMM, and it will look similarly wave-like in the Uamua written script.

So “Ma” is the concept of “waveform” which is “logical” because it has an “A” in it.  “U” is the physical sound and “A” is the logical sound, so “Ma” is a logical idea of a waveform, and “Mu” is a physical idea of a waveform.

So “Uma” is “U” + “Ma” which means “the physical representation of the logical waveform”.

"Umu" would mean "the physical representation of the physical waveform".  This could be understood as a sound wave, like what we hear when we speak, when someone plays guitar, or when the radio plays.

"Uma" would be the logical representation, which would be like an MP3 sitting in your computer's memory.  It is the abstract concept of the waveform that makes up a sound.  The electricity running through the computer and memory chip is physical, but the "idea" of the musical waveform is "uma".

When "U" and "A" are put together as "UA", they mean "the physical and logical representations".  So "Mua" means "the physical and logical representations of a waveform".

And, “Uamua” (the language name) means “the physical and logical topic of a waveform’s physical and logical representation”, or to put it another way:  “spoken, written and informational language or data”.
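To make the composition rules above concrete, here is a rough sketch of them as a toy Python gloss function.  Only the U/A and M meanings come from this post; the function itself and its output format are just my illustration, not part of the language.

```python
# A rough sketch of the composition rules above as a toy gloss tool.
# Only the U/A and M meanings come from this post; everything else
# (function name, output format) is illustration.

VOWELS = {"u": "physical", "a": "logical"}
CONSONANTS = {"m": "waveform"}  # more primitives would be defined over time

def gloss_word(word):
    """Gloss a Uamua word syllable by syllable."""
    word = word.lower()
    parts, i = [], 0
    while i < len(word):
        if word[i] in CONSONANTS and i + 1 < len(word) and word[i + 1] in VOWELS:
            # Consonant + vowel syllable: a concept colored physical/logical.
            parts.append(f"{VOWELS[word[i + 1]]} {CONSONANTS[word[i]]}")
            i += 2
        elif word[i] in VOWELS:
            # A bare vowel marks what follows as physical or logical.
            parts.append(VOWELS[word[i]])
            i += 1
        else:
            raise ValueError(f"unrecognized syllable at {word[i:]!r}")
    return " + ".join(parts)

print(gloss_word("uma"))    # physical + logical waveform
print(gloss_word("umu"))    # physical + physical waveform
print(gloss_word("uamua"))  # physical + logical + physical waveform + logical
```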

3. There are no value judgements in Uamua.  Nothing like “good” or “bad”. 

There are no value descriptions.  So when describing things, the effects are described, and can be described accurately.  But you can't say "this is bad".  You can say "That is inefficient in aspects: X, Y, Z".

Further, you won't say things like "I don't like that", but will say things like "X property of that is less efficiently matched with Y property when doing Z".  This will sound more natural in Uamua than it does in English, because in English it is normal to say "I don't like that" without specifying the reasons, and in Uamua that is impossible to say.  The specific differences must be stated.

4. Uamua is a spoken/written language, and also a programming language, like C++, Java, Python, etc. 

Its syntax is versioned, and all documents should start with their Uamua version number, as each version of Uamua will map to different contexts.  Uamua will have a syntax that can be validated the way formal math is validated, and can be "executed" like a program.

Uamua will have functional, imperative, relational and other programming methods built into the grammar of the language, so that recursion can be explicitly stated for one section of a concept while understood not to apply to other parts.  In this way sentences and paragraphs can be used as conversation, but could also be used to perform work if executed on a computer (like running a program), and could be replayed in the future the way programs are re-run ("running a script"), producing a different result if given a different context/environment (input).
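As a purely hypothetical sketch of the versioning idea, a tool might refuse to interpret a document whose declared version it does not support.  The "uamua 1.0" header line and the function below are placeholders I made up for illustration; nothing about the real syntax is decided yet.

```python
# Hypothetical sketch: refuse to interpret a document whose declared Uamua
# version this tool does not understand. The "uamua 1.0" header line is a
# placeholder format of my own, not real syntax.

SUPPORTED_VERSIONS = {"1.0"}

def check_version(document_text):
    """Return the declared version, or raise if this tool cannot interpret it."""
    first_line = document_text.strip().splitlines()[0]
    if not first_line.lower().startswith("uamua "):
        raise ValueError("document does not declare a Uamua version")
    version = first_line.split()[1]
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"version {version} maps to a different set of concepts")
    return version

print(check_version("uamua 1.0\numa mua uamua"))  # -> 1.0
```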

Uamua versions will change as the understanding of the speakers changes.  As new discoveries in physics are made, there will be mapping changes to the language, so that the language describes the world accurately.

Speakers of the language will have to learn about changes in science/math, and the corresponding updates to the language, so that they can continue to communicate with others as the version of the language changes.  This will be structured in the stories so that it is not a burden, but a path to learning the best current understanding of how the universe works.

In the story, there will be cut-over dates when the version of Uamua changes, and businesses will prepare for the cut-over date, and so on.  It will be in the news a lot, so people can talk about the upcoming changes.  This is similar to how the addition of new words to the dictionary is sometimes reported in the news, but much more important and relevant.  It will also be like how laws for business might change, because any interaction would take on a possible new meaning, so contracts written in different versions would map to different definitions, and thus would be different agreements.

In this way children will automatically learn the best science and math that is available at the time, because as they grow up their language will be mapped to those exact ideas, so they will simply learn to think in those ways.

They will also grow up knowing they can change their language, and knowing the process of how to go about doing it.  Language changes would be submitted for discussion, and integration efforts would be made to find the smallest set of changes that best bring about the updates to the language, if they are found to make an important enough improvement.

5. Labels in Uamua always start with a consonant sound. 

A label would be like a person's name, or a brand-name object; "Kleenex" would be an accurate word in Uamua.  In this way Uamua can borrow all existing words from other languages, and use them as labels for what they mean in those languages (not as their Uamua descriptions, which would be physically/logically based, programmable, formal descriptions).

Words that start with vowels would have a special consonant prefixed to them.  Currently I am thinking of a "compound consonant sound" like "Z't", which would be like "zit" but with an inflection between the consonant sounds.  So "orange" is "Z'torange" in Uamua.  I chose "Z't" because it seems like a fairly unique sonic construct that allows a maximum of foreign vowel-initial labels to be used, and it is easy to recognize as a loan word because it is the only thing inflected like that.

Some further syllable constructs are:

A “K” starting syllable at the end of a word, as a question marker (like Japanese uses with -ka and Chinese uses with -ma).

A word with this would be:  “Uamuakua”, which means “what are the physical and logical waveforms of physics and logic?”

An "L" starting syllable is used as negation (like "not" something), so a word like "umulu" means "it isn't a physical waveform".

Adding this all together we could make “umuluka” which would mean “is it data (logical) for a physical waveform?”
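Extending the earlier gloss sketch, the question and negation markers could be layered in like this.  Again, only the U/A/M/K/L meanings come from the rules above; the glossing style is my own illustration.

```python
# Toy extension of the earlier gloss sketch: a trailing "k" syllable marks a
# question and an "l" syllable marks negation. Only the U/A/M/K/L meanings
# come from this post; the glossing style is my own.

SYLLABLE_MEANINGS = {"m": "waveform", "k": "question?", "l": "not"}

def gloss(word):
    parts, i = [], 0
    word = word.lower()
    while i < len(word):
        c = word[i]
        if c in SYLLABLE_MEANINGS and i + 1 < len(word) and word[i + 1] in "ua":
            scope = "physical" if word[i + 1] == "u" else "logical"
            meaning = SYLLABLE_MEANINGS[c]
            # Grammatical markers (k, l) read as-is; concepts get a scope.
            parts.append(meaning if c in "kl" else f"{scope} {meaning}")
            i += 2
        elif c in "ua":
            parts.append("physical" if c == "u" else "logical")
            i += 1
        else:
            raise ValueError(f"unrecognized syllable at {word[i:]!r}")
    return " ".join(parts)

print(gloss("umulu"))    # physical physical waveform not
print(gloss("umuluka"))  # physical physical waveform not question?
```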

I’ve just posted a question on /r/AskScience about the best type of quantum mechanics or other physical sets of descriptions to best map to this language, here:  http://np.reddit.com/r/askscience/comments/2z0789/what_is_the_count_of_quantum_mechanics_primitives/

I'm hoping to get a good way to map how physicists currently describe reality into the language, to get Version 1.0 kicked off.  Then I can try to merge the above ideas with a mapping to quarks and their spins/states to actually describe the effects they produce (mass, movement, gravity, etc), so that Uamua version 1.0 is mapped to our Standard Model of physics.

As physics and other sciences progress, Uamua would increase its version number and change to adopt those concepts, keeping pace with scientific discovery and human knowledge over time in a different way than, say, English changes over time.

More Unity3D Model Arrangement

Doing a little more model arrangement before moving on to characters.

I did the initial terrain generation with Terrain Composer.  I'm using RTP to do the height/normal maps for the terrain textures.  Unity 4.6.

[Screenshot: unity_old_14]

Castle courtyard with paths and billboard grass.

[Screenshot: unity_old_9]

Added lots of billboard grass and trees to the scene.

[Screenshot: unity_old_13]

A view from a catwalk.

[Screenshot: unity_old_10]

Toy sized castle. It’s fun jumping over the walls and running over the buildings as a giant. Needs destruction capabilities.

Trying different tree combinations.


Unity3D is fun. Unity3D is easy.

These days, I like to wait until fashion has already come and gone before giving lots of things a shot.  On the plus side, I don't waste time on things that use up time and don't add long-term value.  On the minus side, I don't use some cool things for a long time, when I could have been.  That minus side has a plus side though, which is that when I do use them, they are usually mature.

Such is the case with Blender, and more recently Unity3D.   Blender used to crash all the time, and the interface was weird as well.  Now it’s stable and feels quite nice to me.

Unity3D didn’t have a lot of demos when I first tried it, and lots of limitations.  Now it’s amazing.  The Asset Store is filled with things that make it even more amazing.  PlayMaker is an amazing plug-in that handles most things you want to do in Unity with Finite State Machines, and a clean visual interface.  Being able to use the Unity editor and work quickly with assets, seeing everything as I go, and then being able to set up a lot of interactivity and event based communication between custom code and plugins is awesome.  Developing things has never felt faster than this.

Here are some things I’ve been throwing together (saying things I “made” feels wrong, it’s like making a collage):

[Screenshot: unity_old_0]

The first scene I put together. Moving around some purchased assets.

[Screenshot: unity_old_1]

This asset doesn't have an inside. I might add one later, or just use it as a model to create something new.

[Screenshot: unity_old_2]

This house has 3 doors that I hooked up with PlayMaker distance checks to open the doors. Later I'll switch this to a Use input.

[Screenshot: unity_old_4]

The distance-based door in its open state. PlayMaker has a debug mode that writes a label over the asset position so you can see its state in real time. Very useful.

[Screenshot: unity_old_5]

Some bedroom furniture. Pretty sparse.

[Screenshot: unity_old_6]

Upstairs room with the deck, caught the door in its Close state.

New ways of learning dogma

Earlier tonight, I watched Bret Victor's video, The Future of Programming, in which he goes over the history of computing up to 1973, then looks forward into the future that is our present, while staying in 1973 character.  It's an excellent video I highly recommend, but that's not the reason I'm writing this.

I have been thinking about one of his central tenets:

  1. …that in order to advance beyond simply improving the efficiency of the current ways of doing things…
  2. …and instead evolve the current ways of doing things into even better ways…
  3. …that adapt to their environment better and provide better results and interactivity…
  4. …we need to be able to overcome dogmatic thinking.

Then I thought: what we really need in order to start doing this is to stop teaching in a way where learning means learning dogma.  It is a problem we currently face, I believe, because teaching is meant to say "here is the right way of doing things, please learn all about it and how to do it".  And people do, but then they think this is The Right Way, and other ways that deviate from it are not The Right Way.

This creates a slow evolution of ideas, and it means that the ideas are not fitted to each other as the realities of their situation actually exist, but as they were dogmatically learned.

This means one ideal that must be held for advancement on this problem: we must change learning so that it does not treat what is learned as The Truth or The Right Way.  That method of learning produces dead knowledge; it does not adapt to the environment it will be used in, and it does not adapt to the way the rest of the world changes.

This can obviously be taken in the naive, hyperbolic sense of "no one is learning anything new!", but I mean it the way I think Bret meant it: we can do a lot more cool things if we change the way we do things, and there are practical examples of missed opportunities.  But they are only missed for as long as we don't learn to be free to learn those, and the many better ways of doing things that haven't been thought of yet.

If delivering code was like delivering food…

If coding practices were to replace cooking practices in restaurants, this would be a common sentiment:

I only eat at restaurants where the first and primary consideration is how the other chefs and cooks feel about the quality of their products.

I require that they keep their kitchen clean, even at the sacrifice of giving me my food in a timely manner, or at the cost of the price on the menu. I couldn't care less what the actual price of the meal is, or if it takes several hours or days to arrive, as long as the kitchen is run very clean and orderly, preferably prioritized by popular industry consensus.

I prefer each chef or cook to work with the same implements, even if they have different jobs. I think all of them should be trained to use all the tools, and that they should concentrate on general tools that can be used for many things, instead of special-case tools. A whipped cream dispenser and a spatula inherit many similar attributes; both are solid objects that can hold things on their surface, and if designed properly they can also both hold and dispense whipped cream. So they should be designed to be modular for more general use.

I am typically appalled at the general non-reuse of materials in restaurants.  For example, many customers could share the same food in a simple Meals-as-a-Service food item instance system, where several customers could subscribe to the same food simultaneously.

They also show a distinct lack of interest in maintenance on their product after delivery.

Companies are a form of Artificial Intelligence

[This article is an excerpt from a comment I made on a Reddit thread, but I thought it was interesting enough to be its own post.  Original thread.]

Normally we think about AI as what the MIT guys started working on way back when, or else as something out of Neuromancer-type stories, where computer software becomes cognizant and stuff.

Another term that's been used in a similar but slightly different vein is Artificial Life, which removes the Intelligence factor and just basically says "a non-DNA, evolutionary-based entity" (like us, dogs or amoebas) that shows characteristics of living, usually a good number of the 7 classic signs of life: Movement, Respiration, Sensitivity, Growth, Reproduction, Excretion, Nutrition.

When I look at how organizations work (companies, governments, other organizations (Triple H, Red Cross)) they have many of the same signs that regular life has.

  • Movement: Governments will occupy new spaces with their militaries, and in the old days tribes would move with the herds.  Companies will relocate to a different state to get better taxation rates.
  • Respiration: This is really just a molecular gas-exchange version of Nutrition, as we use the oxygen for energy and release CO2 as Excretion.  Artificial Life shouldn't be expected to have separate Nutrition and Respiration, since these categories are somewhat arbitrary, based on us and our kind.
  • Sensitivity: Bad press because of a product?  Company releases a recall to minimize damages, reacting to their environment.
  • Reproduction: Greenpeace will have splinter groups that have a new charter based on the old charter.
  • Growth: Google doubled in company size, year after year until they hit about 50,000 total (employees/contractors).  All organisms hit a maturation size where for the purpose of that organism it becomes harder to grow.
  • Excretion: Products and services could be one kind of “output” from an Artificial Life.  Carbon being released or chemicals being left over after manufacturing could be another.  Reams of paper being used up could be another.
  • Nutrition: These types of Artificial Life seem to use money, in the form of revenues or taxes, to sustain themselves, hiring their employees/actors to perform the required tasks.  They also require raw resources, such as components that become the computers that are used, paper that is created, carpets, office space, land.  Since they aren't DNA-based, what it takes to feed them has a different tilt to it.

So that’s the Artificial Life component.  The biggest thing is that they have a life independent of the people that create them.  They can outlive their founders (Disney), they can replace their founders (Yahoo).  They can change many times and still be seen as essentially the same entity (Great Britain: Monarchy -> Magna Carta -> Present).

The Artificial Intelligence comes from how they interact with each other, and how they change over time.  You could have all the above characteristics and still not be intelligent.  But given the above A.L. definitions, and looking at, say, Intel or Lockheed Martin: they are producing some very intricately designed goods, they improve these goods over time, and they change their direction based on competition, the environment (what people want), and levels of funding (feast or famine, active or hibernate).  More importantly, I think, none of the individual actors could produce these goods on their own, or even fully explain them at the level they were designed, which means that the organization is capable of more intelligent actions than any of its actors.

Seen as entities in their own right, they appear as independent operators, and in fact the rules that internally govern organizations change over time as well, so within the same "life span" they are evolving.  The Ford of today is far and away different from the Ford when it was established, yet it is the same entity.

That was a long way of getting to this, but what I meant by “process based AI” is that it is AI that is created from processes, such as legal arrangements (incorporation), and hiring contracts (to retain employees/actors), and departmental goals (to retain the properly directed and skilled actors), intra-company communication (so marketing can tell engineering what to build) and extra-company communication (sales, support, tax collectors).

In this way I think they are AI that are essentially immortal and min/max for their own survival, using people as tools.  They aren't fully sentient (thinking about themselves thinking about themselves), but they act intelligently, and they will adapt to their environments (laying off employees, offshoring) to improve their ability to live, and to avoid ending their lives.

Very few organizations that don't simply run out of resources are ever shut down.  They have a sort of "instinct to live" in the same way that we do.  They have gating factors: if a CEO is taking the company towards its destruction, the Board will meet, fire the CEO, and replace them with a new CEO who is directed not to take the previous course.  Yahoo is a recent example of this happening several times in a row as the entity tries to adapt to its new environment.

Why operational systems and programs require testing, debugging and refactoring.

An operational environment, such as a web site and web applications with their corresponding data and infrastructure, has many similarities to an executable program. Each of them has a collection of data, logic and resource requirements to perform a function through a series of instructions, and both of them process their execution requests against their data and run-time environments.

When an error occurs, an exception is thrown, sometimes loudly, sometimes it is quietly caught and ignored. Many classes of errors can’t be found until the program is run, because the compiler cannot test for many kinds of logic problems, or issues with the run-time environment, and parts of the program that are interpreted at run-time may have problems that cannot be tested in advance of being run due to the flexibility of interpretation at runtime.

Similarly, when initially configuring an operational system, some errors can be found on setting up the services and storage, when making sure all the pieces connect together. Like running a program and not seeing any errors on completion, the operational system can be tested for functionality while it is first put together.

However, those many classes of errors that can’t be found by the compiler may still exist in the areas of the code that have not yet been run, and as a general rule, they are always present.

In the areas of a program or a system that have not been tested, there will be problems that halt operation, and it may or may not be immediately obvious what is wrong. Additionally, the fix for the problem may not be something that can be done without a number of changes being made to the program or system.

In the case of a program, this is expected. After writing a program, it is known to require a testing and debugging phase, and lately it has also become popular to refactor code so that the design stays robust and healthy, and does not become more entwined with coupled logic which becomes harder to change as it grows in scope and size.

Due to the similarities between a program and an operational system, operational systems also require testing, debugging and refactoring before they should be deployed from development to production. The stages of development, testing, debugging and refactoring have become common sense in the development community, but are not always seen as important in the operational community.

An excellent method in both communities to quickly achieve a result is to build a rapid prototype. This produces a functioning program or system that can be proven to function, and that provides enormous insight into the requirements for integrating all the components together to create that program or system.

Additionally, rapid prototyping provides a working blueprint for how to transition to a production-quality program or system. With a functioning prototype, all the pieces can be seen working together, and all the initial plans and theories about making the program or system work reliably will have real experience to back them up.

Once the rapid prototype is available, translating each of the visible requirements can be done rapidly, as new plans can be drawn up after reviewing the areas where the prototype’s design worked strongest and weakest.

Shortly thereafter, a tested, debugged and refactored program or system can be released with confidence that supporting its usage over time will be manageable, because of the valuable insight the prototype provided and the robust improvements added by the production release.

After all, unlike a program which is simply stopped and started, an operational system is running 24/7. Upgrading an operational system while it is carrying traffic is significantly harder and slower, and so significantly more costly, than upgrading it before it has started carrying traffic in perpetuity.

A short canary test of the prototype, to determine how well it functions under user traffic, can provide another level of additional insight, and can take place in parallel with preparing the production release.
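As a rough sketch of the canary idea, the split can be as simple as routing a small fraction of requests to the prototype and the rest to the current system. The 5% fraction and backend names below are placeholders for illustration, not a recommendation.

```python
# Minimal sketch of a canary split: route a small fraction of requests to the
# prototype and the rest to the current system. The 5% fraction and backend
# names are placeholders for illustration.
import random
from collections import Counter

CANARY_FRACTION = 0.05  # 5% of traffic goes to the prototype

def pick_backend():
    """Choose which backend serves this request."""
    return "prototype" if random.random() < CANARY_FRACTION else "current"

# Tally a simulated batch of requests to sanity-check the split.
print(Counter(pick_backend() for _ in range(10_000)))
```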

[Originally written 01-24-2010, published in my Red Eye Monitor blog]

Operations is about responsibility, what is NoOps about?

Since the NoOps movement didn’t die out as soon as it started, as I hoped it would, I think it might be nice to illustrate why there will always be an Operations team.

Technology will constantly change, and new software is always written.  In the life of a company that provides services on the Internet, databases will need to be maintained, software will need to be deployed, configured and maintained, technical security policies will need to be configured, audited, reviewed and maintained, and so will networks and servers.

Whether the servers are rented virtual machines or owned physical machines makes no difference.  At some point, maintenance and performance-tuning configuration will need to be performed after the initial configuration, as well as ensuring backups are made and the variety of installations across different machines is kept up to date.

Someone has to do this work.  Who will it be?  It doesn’t matter.  Those people are your Operations team.

Do you want your developers to be your operations team, doing installation, configuration, maintenance, system testing, upgrades, and troubleshooting of things they didn’t themselves write?  Ok, you can do that.  Now they are Developers and Operations people.

However, we as an industry have specialized for a reason.  In the old days, all developers were also the operations people, because the systems were so primitive that the work of writing new software and the work of installing, configuring and maintaining machines were essentially the same thing.

Over time, machines have become networked systems, and the ratio of non-custom software to custom software has changed quite a bit.  In the beginning, the user wrote all the software the machine ran.  Now, almost all the software running on any given machine was written outside the user's organization, and the user's organization writes a small fraction; just enough to get the custom functionality they need.

In the nonsense movement that is NoOps, the idea is that no one has to spend much time doing any work on all the software not written in-house.  You develop your custom applications, and then integrate them with all the pre-written OS and application software, and the developers do all the work, automating as they go along.

In reality, there is a trade off being made silently here.  Either you are accepting that developers do not become skilled Operations people themselves, creating NoOps, and then all decisions made are by definition naive decisions without experience or real-world information to back them up.

Or, you are deciding that developers must become skilled Operations people themselves, so that they do not make naive decisions, switching from having a distinct pool of dedicated Operations people to having people who develop solutions and then also do all the support any other Operations team member would have had to do.

Doesn’t automation solve this?

Only the easiest things are easily automated.  Automation is incredibly hard to do comprehensively, and you can see this clearly by looking over any of my work on how to comprehensively automate things, which I detail in other sections on this blog.

The NoOps movement is a naive idea hidden behind the incorrect assumption that developers can easily knock out comprehensive automation that allows them to not need to become experts or spend significant time on operations problems, even when their software and internet service operations are core to their business.

Since I worked for the organization that proposed this concept, briefly, I can see why it was created, because they have indeed turned all of their developers into Junior System Administrators, without a single Senior System Administrator in the entire organization.  It’s only natural in such a place to rationalize this disaster as being a good thing, and obviously the way forward for everyone else as well as themselves.  Unfortunately, their track record shows that they are not good at operations, or even doing basic availability, and there is a reason for that.

As the saying goes: “A man should always think about the source of the water as he drinks it.”

Who’s on-call?

Developers’ specialization is to write software, and maintain written software. Operations’ specialization is installing, configuring, maintaining, securing and adapting a given infrastructure to custom and non-custom solutions.

Who gets paged when it breaks? Operations.

If you get rid of Operations, NoOps, then developers get paged. Now developers not only specialize in writing and maintaining software, but also in dealing with all infrastructure issues.

How is this efficient? How does it build deep knowledge about the specific operations of a given infrastructure?

How does it limit the movement of developers across projects, if they also need to maintain a specific infrastructure they are familiar with? Or, should no one be familiar with any specific infrastructure, so no one has deep insight into its workings for when it has issues or needs to be adapted for a new solution?

Answering these questions yourself should give all the personal evidence you need to understand that the NoOps movement was not well thought out. It is inefficient, and if it works at all, it does so by ensuring that beginner-level decisions are made over and over, on critical infrastructure topics.

Comprehensive automation solutions for implementing and controlling changes in production infrastructure are something I, as an Operations person, care deeply about. The NoOps movement provides nothing towards this but a pipe dream built on irresponsible suggestions that will impact the efficiency and availability of any organization that naively accepts them as a goal.

Operations groups are about responsibility over infrastructure, just as developers have a responsibility to write and maintain code. How the infrastructure responsibilities are divided up will be different at every organization, but unless an organization does not care about the results their operational infrastructure gives them, they need an Operations team to manage that infrastructure, regardless of what those team members' other responsibilities are.

Floating Schema Schema

This is the eighth article in my series on creating comprehensive automation.  Click here to jump to the first article.

The Floating Schema System


The Floating Schema is a way to store over-arching schema information for all your database data, which can be used from any source, and can be implemented against many different platform and operating system level services.

Databases

A minimal description to specify that a database instance exists.

We give it a name, and a type, which can be as simple as a string such as “mysql” or a reference to a database type specification, which gives more precise and readily usable data.

Finally we give a URL, which can be just the type and hostname, such as "mysql://prodmysqldb/", but could also include the user, password and other parameters to pass to the database as URL arguments.

This is a very minimal representation, and should be expanded upon to suit your needs.
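As a sketch of what this minimal record could look like in practice (the exact key names are my own; only the name, type and URL come from the description above):

```python
# Sketch of the minimal database specification described above. Only the
# name, type and URL come from this post; the key names are my own.
database_spec = {
    "name": "prod_mysql",
    "type": "mysql",                # or a reference to a database type spec
    "url": "mysql://prodmysqldb/",  # could also carry user/password/params
}
```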

Database Schemas

Comparable to a table, but nothing is stored in this.  It is just a description of a container for specifying a collection of fields.

As such, we specify a unique id, the database this schema is in, the name of the schema, any text info about the schema, and then access information by specified group or evaluated by a script.

Schema Field

Schemas are collections of fields, and for a field we specify its schema, its type, an optional custom formatting script, an optional custom default value script, and access information by specified group or evaluated by a script.

Version information could also be added to all of these, but was left out to keep them simpler.

Schema Field Type

Field types provide a way to generalize how to validate, format, restrict access to and provide default values for schema fields.
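Here is an illustrative sketch of the schema, field and field type records described above, written as Python dataclasses. The attribute names are my shorthand for the properties listed in this post, not a canonical definition.

```python
# Illustrative sketch of the schema, field and field type records described
# above, as Python dataclasses. Attribute names are my shorthand, not a
# canonical definition.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatabaseSchema:
    id: int                              # unique id
    database: str                        # the database this schema is in
    name: str                            # name of the schema
    info: str = ""                       # free-text info about the schema
    access_group: Optional[str] = None   # access by specified group...
    access_script: Optional[str] = None  # ...or evaluated by a script

@dataclass
class SchemaFieldType:
    name: str                              # e.g. "hostname", "email"
    validate_script: Optional[str] = None  # how to validate values
    format_script: Optional[str] = None    # how to format values
    default_script: Optional[str] = None   # how to provide default values
    access_script: Optional[str] = None    # how to restrict access

@dataclass
class SchemaField:
    schema_id: int                        # the schema this field belongs to
    name: str
    type: SchemaFieldType
    format_script: Optional[str] = None   # optional custom formatting script
    default_script: Optional[str] = None  # optional custom default value script
    access_group: Optional[str] = None    # access by specified group...
    access_script: Optional[str] = None   # ...or evaluated by a script
```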

What All This Provides

As you can see, creating a comprehensive automation system requires wrapping everything, including where our data sources are, what kind they are, and specifying what is in them.

The benefit received from this highly structured system is that APIs can be generated for scripts to use for any sort of reading and writing operations against our databases, and we can allow or deny queries based on authentication and access checks, or on data validation failures.

Display and interactive systems can be written against this data, and they will all be updated accordingly if they use the data and scripts referenced in the specification.  This provides a framework to write any level of tool against, and thus is a basis for a comprehensive life-cycle system.
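As a sketch of the kind of guarded write one of those generated APIs could perform, assuming the record shapes sketched above; the allowed, validate and write_row hooks below are stand-ins for the real access, validation and storage layers, not actual implementations.

```python
# Sketch of the kind of guarded write a generated API could perform, assuming
# the record shapes sketched above. The allowed/validate/write_row hooks are
# stand-ins for the real access, validation and storage layers.

def allowed(user, schema):
    # Stand-in: a real check would test group membership or run the access script.
    return schema.access_group is None or user == schema.access_group

def validate(field_type, value):
    # Stand-in: a real check would run the field type's validation script.
    return value is not None

def write_row(schema, row):
    # Stand-in: a real implementation would write to the actual database.
    print(f"writing to {schema.name}: {row}")

def guarded_write(user, schema, fields, row):
    """Check access and per-field validation before handing off the write."""
    if not allowed(user, schema):
        raise PermissionError(f"{user} may not write to {schema.name}")
    for field in fields:
        value = row.get(field.name)
        if not validate(field.type, value):
            raise ValueError(f"{field.name}: {value!r} fails validation")
    write_row(schema, row)
```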

Final Overview

Remember This?

From the Walkthrough of an MSGR System post, we have the screen shot of the MSGR application.

Take some time to compare the schema we have just created for the MSGR data, the CMS data and now our Database Schema data.

Change Management System Schema

This is the seventh article in my series on creating comprehensive automation.  Click here to jump to the first article.

The MSGR Change Management System Overview



The Change Record

The change record is at the heart of a Change Management System, as we are tracking changes.

We want to know just a few things about any given change:

  • Who made it?
  • When was it made?
  • Was it submitted or abandoned?  When?

From here we can attach additional records about the values being changed, and comments about this change.

The Change Value

This is the substance of what is being changed.  It references the change record, so we can group many value changes together.

To specify what is being changed we specify the database, the schema or table name, the record ID and the field.

This allows us to specify changes about any database in our system, and any table in that database, and any record in that table, and any field in that record.  We have a very flexible change management system.

Combine this with an authoritative database of all the databases in your system, and you can use a single instance of this CMS to track changes in any database, and across databases, in your system.  This provides a form of cross-database transactional behavior.

Finally we store the value of what is being changed.  This can be type validated by looking at the description of the database schema field type specification, which I will cover in my next article.
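To make the two records concrete, here is an illustrative sketch of the change and change value records as Python dataclasses. The attribute names are my shorthand for the fields described above, not the exact column names.

```python
# Illustrative sketch of the change and change value records described above,
# as Python dataclasses. Attribute names are my shorthand for the fields in
# this post, not exact column names.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Change:
    id: int
    created_by: str                           # who made it
    created_at: datetime                      # when it was made
    submitted_at: Optional[datetime] = None   # when it was submitted...
    abandoned_at: Optional[datetime] = None   # ...or abandoned

@dataclass
class ChangeValue:
    change_id: int   # groups many value changes under one change record
    database: str    # which database
    table: str       # which schema/table
    record_id: int   # which record
    field: str       # which field
    value: str       # the new value, type-validated via the field type spec
```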

The Comment

For a change management system to be of more use, it needs to be collaborative.  Other people should be able to look at the change, and then comment on whether or not it should proceed.  There may be many exchanges on things to change before it should be committed.

To facilitate this we have the change_comment, which references the change, and then provides a summary and body, and tracks whether this is a Review Request, Ship It, or Abandon It comment, and finally who wrote this comment and when.

Review Requests are sent whenever a change is ready to be reviewed by other people.  It is sent to a list of emails specified by the email_to field.

Ship It boolean flags tell the creator to commit this change.  Abandon It boolean flags tell the creator to abandon this change, because it is the wrong thing to do.  If it could be corrected into the right thing to do, then a series of body comments could lead to a final is_ship_it=true and the change can be committed.
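And a matching sketch of the change_comment record; only email_to and the is_ship_it-style flags are named in this post, so the other attribute names are my shorthand.

```python
# Matching sketch of the change_comment record. Only email_to and the
# is_ship_it-style flags are named in this post; other attribute names are
# my shorthand.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeComment:
    change_id: int                     # the change this comment references
    summary: str
    body: str
    created_by: str                    # who wrote this comment
    created_at: datetime               # and when
    is_review_request: bool = False    # send a review request to email_to
    is_ship_it: bool = False           # tells the creator to commit the change
    is_abandon_it: bool = False        # tells the creator to abandon the change
    email_to: str = ""                 # recipients of review requests
```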

Next Steps

Having created a MSGR schema allows us to track a hierarchy of all our services and hosts.  Having a CMS schema allows us to track changes, and then commit them all at once to their respective database tables, after a review process.

The next step is to make an over-arching schema that describes where all our data is, and how to validate it, format it and restrict access to it.  For this I am creating my own database abstraction layer, which I am calling a Floating Schema, because it sits on top of other database schemas, can be looked at in pieces, and can refer out to other database specification sources.

Read the next article: Floating Schema Schema