Sunday, July 6, 2014

Definitive Module Pattern

When it comes to design patterns in JavaScript, the Module Pattern currently exists in two forms: (1) the Original Module Pattern, and (2) the Revealing Module Pattern. Today, I present a third version of the Module Pattern, which I call the "Definitive Module Pattern": https://github.com/tfmontague/definitive-module-pattern

Design Patterns

If you're still using the JavaScript prototype pattern, you're a little outdated. Repetitively using "this" and "prototype" is not a great way to write JavaScript code.
function module() {
    this.public = function () {};
};
Several other JavaScript design patterns exist (e.g. Module, Singleton, Flyweight, and more) and should be used where appropriate. Addy Osmani at Google has done a great job of classifying various JavaScript design patterns (as documented in his book Learning JavaScript Design Patterns). One design pattern not found in his book, however, is the "Definitive Module Pattern" - mainly because I just created it (or so I believe) about half an hour ago.

A common alternative to the prototype pattern is the Module Pattern.

Module Pattern

var module = (function () {

    // private subroutines
    var private_one = function () {};
    var private_two = function () {};

    // public subroutines
    return {
        public_one: function () {
            private_one();
        },
        public_two: function () {
            private_two();
        }
    };

})();
However, both the original and the revealing Module Pattern carry some "code smell".

Definitive Module Pattern


The "Definitive Module Pattern" retains the advantages of the Module pattern (e.g. public and private scope), and the Revealing Module Pattern (i.e. not having to repetitively declare anonymous functions). Yet the pattern, offers the following advantages: decouples the Return statement from the "_public" subroutines, groups private and public subroutines into "_private" and "_public" object literals, and provides configurable public scope.
var module = (function () {

    // private subroutines
    var _private = {
        private_one: function () {},
        private_two: function () {}
    };

    // public subroutines
    var _public = {
        public_one: _private.private_one,
        public_two: _private.private_two
    };

    return _public;

})();
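
Usage then looks like the following (a quick sketch using the example above):

module.public_one(); // delegates to _private.private_one
module.public_two(); // delegates to _private.private_two

// The _private object is never exposed outside the closure
console.log(module.private_one); // undefined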


Tim Montague
7/6/2014

Monday, January 13, 2014

iOS armband

I've just thought of a great product, which could probably be integrated into the rumored Apple smartwatch.

Create an armband instead. It won't require the level of miniaturization needed for a watch, and the armband can act as both a motion controller (like Myo) and a fitness tracker (like Jawbone or Fitbit).

Couple that with Apple's iOS and you have an amazing new experience. Automate your home, track your health, and communicate with friends and family. Creating a single fluid experience is key.


Cheers,

Tim Montague

Thursday, December 12, 2013

The browser is your avatar

I first suggested that browsers and servers should be combined under one peer-to-peer architecture as a "Smart" browser about 4-5 years ago, and even created a framework for how it would work under a project named "Creed". I then presented the idea to computer scientists and got no response. Typical.

However, the vision remains clear: as client-side applications become "rich" by adopting a server-side Model-View-Controller framework, they edge closer and closer to becoming servers. Backbone, Angular, Can, and Ember all resemble mini-servers. And when the Chrome browser becomes more like a server, think back on this article.

The logic is simple. 

Why have a redundant MVC pattern on both the client and the server? 
All it takes from a software developer's standpoint is the keyword "Private" to represent server-side code, and "Public" to represent client-side code.
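
As a rough sketch (the Private/Public keywords themselves are hypothetical, so plain object literals stand in for them here):

var app = (function () {

    // "Private" - code that would live on the server side of the browser
    var _private = {
        queryDatabase: function (query) {
            return [{ title: 'Hello from the server side' }]; // placeholder data
        }
    };

    // "Public" - code that would be shared with the client side
    var _public = {
        renderPage: function () {
            var rows = _private.queryDatabase('SELECT * FROM posts');
            console.log('Rendering ' + rows.length + ' post(s)');
        }
    };

    return _public;

})();

app.renderPage();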

Why not have a single "Smart" browser that you can extend with server-side plugins? 
People tend to think of the client and the server as programs, but really they are environments of distributed programs (plugins). A "Smart" browser should manage both the client and server environments.

The future of software is a system where these client/server browsers communicate with each other. It goes even further: the "Smart Browser" is not just a browser, it's a web operating system. And finally, the web operating system is not just an operating system - it's a representation of you!

Your personal avatar in the form of a Browser.
And Public means stuff you share with others, and Private means stuff you don't share.


12/12/2013
Tim Montague

Sunday, October 13, 2013

Workspace Groups

Workspaces serve a vital role in productivity, yet the design and functionality of this feature is drastically lacking on all operating systems.

It goes by various names depending on the operating system: Mission Control on Apple's OS X, Virtual Desktops on Windows (via third-party tools), and Workspaces on Ubuntu.

I'm in favor of the name Workspaces or WindowSpaces.

Most people, when they use computers, perform all their work on a single desktop with all their windows open; that is what we call a workspace. However, having all your unrelated content open in dozens of windows on a single desktop workspace is dumb, as it reduces productivity.

Some try to remedy this problem by using dual monitors, purchasing 46'' screens, and sectioning open windows into small clusters on their desktop - it's much easier and less expensive to use multiple workspaces on a single monitor.


A workspace should be dedicated to a group of tasks, or related windows. Imagine you have 4 workspaces. The 1st desktop workspace is for communication (Twitter, Facebook, Linked-In), the 2nd desktop workspace is for writing assignments (MS Word, TextMate, etc.), the 3rd desktop workspace is for computer systems (control panel, system settings, etc.), and the 4th desktop workspace is for graphics (Photoshop, Illustrator, AutoCAD, etc.).

And while workspaces are great for productivity, they still lack major functionality:

Mission Control (by Apple)
- Workspaces can't be organized into rows and columns
- Workspaces can't be organized into rows and columns of various lengths
- Workspaces can't be labeled
- Workspace labels don't display at the top

Virtual Desktops (Windows)
- 3rd party support only
- Workspaces can't be organized into rows or columns of various lengths
- Workspaces can't be rearranged

Workspaces (by Ubuntu)
- Workspaces can't be organized into rows or columns of various lengths

If redesigned, workspaces could be a revolution in the way that teams organize their workflow. For example, imagine that you have a single monitor, and you want to organize your windows and running tasks by department. You could organize your workspaces into rows. The first row is for your software department, the second row of workspaces is for your hardware department, and the third row is for communications.

Each square represents a desktop workspace. Related workspaces are grouped into rows.
Or you could organize the workspace groups by columns:


Each group of workspaces can be given desktop backgrounds of the same color. To switch between workspace groups organized into rows, one could use a hotkey (Ctrl + down, Ctrl + up). To switch between workspaces in the same group, one could use other hotkeys (Ctrl + left, Ctrl + right). When a user switches between workspaces (in the same group or in another group), a label at the top of the window should be displayed with the name of the workspace (and then disappear after 1.5 seconds). One should be able to move the workspace from one group to another, or rearrange the workspaces within each group. Furthermore, one should be able to drag open windows between various workspaces. Transition effects between workspaces should also be optional.

If anyone is aware of workspace tools that will allow me to organize my workflow in this manner, then please email me: tfmontague@gmail.com. Most importantly, I need the ability to organize workspaces into groups - rows or columns of various lengths.

Friday, August 16, 2013

Obvious products

Here are some obvious core products that companies should have already engineered:


Email service by Linked-In: 
Advice: Follow Oracle, SAP, and Salesforce, and invest in cloud-based business software

Set up a proper email service (call it LMail if you wish). If Linked-In created a proper email system (IMAP or POP3) like Gmail, I would switch in two seconds. Gmail is for playing with family and friends. It's not for professional business people, who hate fun.

Linked-In's organization is horrible: photos and text are thrown about the site randomly. Stop treating it like a sales funnel - internally that works, externally it's awful.

There are two kinds of message services:
1. Personal - For those who like to connect with friends; best organized as a feed system with chat. Twitter / Facebook / Google
2. Professional - For those who connect with co-workers and professional networks; best organized as a traditional email system with photos. Skype / Linked-In

I want to use Linked-In like I use Google. Photos of the person when they email me, with access to their professional background and connections.


iCar by Apple:

Advice: Follow Tesla and Google, and invest in automobiles

Get some guts and go bold. Apple has enough cash to revive the City of Detroit, and could surely push Apple into transportation. That they haven't done so yet, while Elon Musk has, seems a little ridiculous to me. iWatch? Give Steve Jobs his iCar already. Cars don't work so well with websites, but iOS apps and real-time services run great.


Public Wi-Fi Trash Cans by Google:
Advice: Partner with Starbucks, and invest in public utilities

Your strengths are public service and advertising. Give our communities free WiFi service built into public recycling bins (you know the green kind with the recycling logo) for our mobile phones. Send us ads from them, and increase the usage of the Google Search engine at the same time. Or offer it to Android phone users only - if you want to be evil.



Email me when the stagnation has ended.

Tim Montague
8-16-2013

The Evolution of Technology

This image reminds me of my posts on Voluntary Evolution and Conscious Technology.

[Image: Nivem.jpg]

I'll reiterate..

Technology is evolving so that it can integrate closer with our biochemistry, as a way to gain voluntary control over our own evolutionary process and the environments in which we evolve.

Life evolves exponentially. Technology evolves exponentially as an extension of human thought. Furthermore, like endosymbiosis, emerging technologies reuse popular patterns from expiring technologies.

Tim Montague
8-16-2013


Tuesday, August 6, 2013

Structured arguments

The problem with unstructured arguments

In all programming languages (or at least most of them), arguments are passed to a method declaration via a method call.

  someMethod (x, y, z);

This requires that the method declaration on the other end accept the same arguments (and data-types) as well.

  someMethod (x, y, z) {
    // This is the method code
  }

Now let's imagine, that you've called the method a few dozen times throughout your code:

  someMethod(x, y, z); // line 22
  someMethod(x, y, z); // line 246
  someMethod(x, y, z); // line 588
  someMethod(x, y, z); // line 612
  ...

That was two months ago (or maybe yesterday). And now you'd like to refactor the method; maybe the second argument is no longer needed, maybe it's more logical if the arguments are arranged in a different order, or maybe you'd like to rename an argument... of course, this means refactoring your method declaration and method calls as well... Zzzzz

  someMethod (x, z, ab) {
    // This is the method code
  }

  someMethod(x, z, ab); // line 22
  someMethod(x, z, ab); // line 246
  someMethod(x, z, ab); // line 588
  someMethod(x, z, ab); // line 612
  ...

Here are your three options: (1) manually rename all arguments, (2) use the Find and Replace script built into most text editors, or (3) redesign the programming language. Today, we're going with the third option. 


Redesign the programming language

Lately, I've been avoiding unstructured arguments altogether, and now pass them as a single structured data-type (a struct in C, an object literal in JavaScript). For example:

  someMethod (object) {
    // This is the method code
  }

This way, I don't need to refactor the method declaration or the method calls. The structured argument and method call should look like the following:

  struct object {
    type key;
    type key;
    ...
  }

  // ~ OR ~

  object = {
    key: value,
    key: value,
    ...
  };

  // Pass object to method call
  someMethod (object);


In dynamic languages (like JavaScript), functions can also be passed as arguments. Typically, I add those callback functions directly to the object as well.
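
For example, in JavaScript (a quick sketch; the names here are made up for illustration):

  // The method declaration accepts one structured argument
  function saveRecord(args) {
    console.log('Saving to ' + args.table);
    if (args.onDone) {
      args.onDone(args.data);   // the optional callback lives on the object too
    }
  }

  // The method call passes a single object literal, so keys can be
  // added, removed, renamed, or reordered without breaking call sites
  saveRecord({
    table: 'wines',
    data: { name: 'Merlot' },
    onDone: function (data) { console.log('Saved', data); }
  });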

In the end we're faced with a final question...
why doesn't the programming language do this for me?

All programming languages lack this fundamental design. Arguments should be implicit to the method. It's unnecessary to repeat the information both in the method declaration and the method call, because the method should only accept a single structured data-type that holds all the arguments.

  // Method Declaration

  someMethod (x, y, z) {     // non-ideal
    print x;
    print y;
  }

  someMethod {               // ideal - Any arguments?
    print arg.x;
    print arg.y;
  }

But, wait... couldn't this allow developers to pass arguments to the method that might never be accessed? Yes. But, methods that accept unstructured arguments "suffer" from this issue as well. It's the role of the compiler and the software developer to check for vestigial arguments.

In conclusion, method declarations should not list the arguments passed by the method call; these arguments should be passed implicitly via a structured argument.


Timothy Franklin Montague
8-6-2013

Tuesday, July 16, 2013

Google Glass by Motion

I wonder what it would be like to combine Google Glass with the Leap Motion Controller?

The "Google Glass" glasses could launch apps on voice activation (as I believe they do now), with some programs responding to motions as captured by a motion controller mounted on the bridge of the glasses.

For example:

By voice activation a user could launch a "Piano game" from the glasses, and then play the keys on any surface as detected by the motion controller.

Tim Montague
7-16-2013

Monday, March 18, 2013

Cubicle Car

The advent of Google's driver-less car, led by Sebastian Thrun at Stanford, pushes me to consider a world of mobile offices. This article explains my concept of the mobile office.

For those who are unaware, Google is attempting to build a car that can drive itself. In conjunction with Google Maps, it's a perfect complement to their product line as Google moves towards useful hardware. The concept and engineering of self-driving cars is nothing new, but it hasn't reached the mass market. In fact, before his death, Steve Jobs dreamed of Apple building an iCar.

But, the value I foresee in the driver-less car is the mobile office. Imagine, that your phone, fax-machine, printer, and computer with email and chat - were all built into the driver-less car.

Microsoft is well known for their office products. And, I predict that if Microsoft offered a mobile office they might achieve the same success that Apple has had with their mobile phones.

No more driving to meet prospective business partners. The driver-less car can take you to all your appointments while you work! Ready for your appointment? Just step out of the vehicle, or meet inside.

And for those concerned about sitting for long periods of time, the mobile office could function more like a Segway and allow people to stand up. Add a treadmill to the floor and you could jog and work while the mobile office drives you to your destination.


Cheers,

Tim Montague
3/18/2013

Sunday, March 10, 2013

ThisAndThat - Internet of Things

Follow along with me, this is quite clever.

If This Then That (https://ifttt.com) is a little rule aimed at the Internet of Things. In JavaScript we like doing things a bit faster. In fact, there's a short-circuit operator for #IFTTT. It's called #ThisAndThat.

    // If This is true, then execute the function That()

    if (This) {
        That( );
    }


Using the short-circuit AND operator (&&) in JavaScript, we can rewrite the above code like so...

    This && That( );
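
For instance (a made-up Internet of Things example):

    var doorIsOpen = true;
    var soundAlarm = function () { console.log('Beep beep!'); };

    // soundAlarm() only runs when doorIsOpen is true
    doorIsOpen && soundAlarm();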

So, next time you need to access the Internet of Things, use a little bit of...
This&&That... (pun intended).


Cheers,

Tim Montague
3/10/2013

Monday, February 25, 2013

Web OS Tags

Wouldn't it be nice if you could organize your computer files around topics? The old method forces each file into a single folder, but tags allow users to sort files into multiple folders at once. This could be a perfect addition to a Web OS (a.k.a. a cloud-based operating system) -- social media already does this -- so it seems natural to sort computer files around topics.

For each hash-tag, the system could generate a virtual folder.
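
A rough sketch of the idea in JavaScript (the file names and tags are made up):

var files = [
    { name: 'budget.xls', tags: ['#work', '#finance'] },
    { name: 'beach.jpg',  tags: ['#vacation'] },
    { name: 'taxes.pdf',  tags: ['#finance'] }
];

// Build one virtual folder per hash-tag;
// a file appears in every folder it is tagged with
var folders = {};
files.forEach(function (file) {
    file.tags.forEach(function (tag) {
        folders[tag] = folders[tag] || [];
        folders[tag].push(file.name);
    });
});

console.log(folders['#finance']); // ['budget.xls', 'taxes.pdf']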

It appears that Apple is going to run the first successful cloud-based operating system with iCloud. Google Drive might also evolve into a cloud-based operating system, or maybe Chrome will be ported.

Monday, February 18, 2013

Hashtags for Bots

If you haven't yet heard about the "Internet of Things" (IoT), then I'm here to inform you that it will be epic. But, that isn't the reason for this post.

What I'd like to discuss is how the "Internet of Things" will shape social media.

In 2007, Chris Messina proposed the hashtag for topics on Twitter, as previously used on IRC channels. It has since seen broad adoption by Twitter users (as well as those on Instagram, Google+, etc.).

The two predominant prefix tags used in micro-blogging are:

1.) AT tags for users -- @MichealScott
2.) HASH tags for topics -- #InternetOfThings

Today, I'm here to propose a brand new way of thinking about social media; one based around internet-enabled devices. Lately, the internet has been moving off IPv4 addresses and onto IPv6 addresses. Why? Because IPv6 provides a lot more addresses for everyday objects. Everything is going online. Your stove, refrigerator, lamps, speakers, fans -- everything.

Here are my proposals:
The @ tag is for users. The @@ tag is for devices (or objects).
The # tag is for topics. The ## tag is for bot generated topics (if practical).

Some example uses:
@@iPhone, @@WeMo, @@Hue, @@ArduinoUno, @@Refrigerator

I think the double prefix is appropriate because, as devices advance, they could become more like users. A single tag could be used for humans, and a double tag could be used for machines. Then, if needed, some devices could adopt the single-tag use case as they become more android-like.
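
Here's a rough sketch of how a feed parser might tell the four prefixes apart (the regular expressions are only illustrative):

var post = 'Turn on the @@Hue lights when @MichealScott posts about #InternetOfThings ##KitchenBot';

// Match the double-prefix tags first...
var deviceTags = post.match(/@@\w+/g) || [];   // ['@@Hue']
var botTopics  = post.match(/##\w+/g) || [];   // ['##KitchenBot']

// ...then strip them out, so only the single-prefix tags remain
var rest       = post.replace(/@@\w+|##\w+/g, '');
var userTags   = rest.match(/@\w+/g) || [];    // ['@MichealScott']
var topicTags  = rest.match(/#\w+/g) || [];    // ['#InternetOfThings']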


Cheers,

Tim Montague
2/19/2013



Saturday, December 29, 2012

Lemming Terminal

Oftentimes, nested routines tack on layers of terminals that close statement blocks. These terminals, while important, slow the development process. This article suggests the "Lemming Terminal" token to close a group of nested routines.

In languages like JavaScript, where nested callbacks can become quite involved (noticeable to anyone who uses jQuery), closing nested routines becomes a real pain and a waste of time.


JAVASCRIPT
exports.findAll = function (req, res) {
  db.collection('wines', function (err, collection) {
    collection.find().toArray(function (err, items) {
      res.send(items);
    });
  });
};

And then we have other languages, like Ruby, that terminate blocks with the "end" keyword.

RUBY
class ArrayMine < Array
        # Build a string from this array, formatting each entry
        # then joining them together.
        def join( sep = $,, format = "%s" )
                collect do |item|
                        sprintf( format, item )
                end.join( sep )
        end
end


The Python language attempts to solve this problem with significant indentation. However, this causes other problems: [1] different text editors handle tab characters differently; sometimes tabs are converted into spaces, or rendered as anywhere from 4 to 8 spaces; and [2] scripts can't be minified by stripping whitespace without turning the program into a trembling mess.

PYTHON
def dosomething(callback): 
        size, reportSize = 20000, 1000 
        callback("begin processing {0} items".format(size)) 
        for i in range(size): 
                if i % reportSize==0: 
                        callback("{0} items processed".format(i))


Terminals are important because they allow us to avoid significant tabs and newline characters, which matters for serialization and compacting code. They allow an entire program to be written on one line if the programmer so chooses. JavaScript, C, and Java are great because they provide well-structured programs, but terminal maintenance is a waste of a programmer's time.

The alternative is to use my new token, called the "Lemming Terminal" (;;), which terminates all open nested blocks. Here we apply the "Lemming Terminal" to our JavaScript and Ruby code from above:

//Javascript
exports.findAll = function (req, res) {

        db.collection('wines', function (err, collection) {
                collection.find().toArray(function (err, items) {
                        res.send(items);
;;





# Ruby
class ArrayMine < Array
        # Build a string from this array, formatting each entry
        # then joining them together.
        def join( sep = $,, format = "%s" )
                collect do |item|
                       sprintf( format, item )
;; (or perhaps an 'endall' statement would be more appropriate for Ruby)
Tim Montague 12-29-2012

Wednesday, December 26, 2012

Happs (hardware apps)

Apple created the App Store in 2008: an online store where one can purchase applications for their mobile phone. Now, heading into 2013, a number of competitors exist - Google, Amazon, Windows, etc. But honestly, app stores can do more. Apps can do more.

The internet of things is the idea that the internet can be extended to everyday devices. Imagine that your dinner set - fork, spoon, knife, napkin, plate - can be manipulated with software applications. As you eat a meal, information about the meal is uploaded to the web.

Eating soup with a spoon? Let's stream the temperature, salt concentration, liquid density, saturation of oils, and more to our personal web profiles.

App stores can do more. Apps are extensively used to render graphics and perform calculations on operating systems. That's mundane. Let's create apps for utensils, cars, containers, and more. Let's expand the app store for the internet of things; for that truly is the future of mobile.


Timothy Montague
12-26-2012

Saturday, December 8, 2012

Conscious Technology

This post extends my prior posts - Voluntary Evolution and Thought-controlled STMs

Endosymbiotic theory suggests that mitochondria, and other organelles, were once separate single-cell organisms that converged with eukaryotes to form an integrated biological system. As molecular technology advances (i.e. nanotechnology and picotechnology) it will also become integrated with our biological systems.

It is quite clear that both life and technology advance exponentially - this connection seems to suggest that technology is co-evolving with life (Ray Kurzweil and Kevin Kelly).

Evolution has taken inanimate matter and organized it in such a fashion as to create an automated system known as life. Out of that automation, our biological systems continue to function and evolve involuntarily.

For instance, it's not my choice how much insulin my body produces. Furthermore, my body doesn't offer voluntary control over insulin production. Instead, humans (such as diabetics) depend on technology to regulate insulin levels.

It was out of consciousness that humans built technology. In the 20th century, technology really started to change with the advent of autonomous technology: first in 1913, with the implementation of the moving assembly line by Henry Ford, and then again with the personal computer in the 1980s by Steve Jobs and Bill Gates.

But why do we build technology?

It almost appears as if evolution follows a pattern:
(1) Convert inanimate matter into involuntary automated systems
(2) Convert involuntary automated systems into voluntary (conscious) automated systems

The next step for human evolution is to integrate technology into our own biological systems as a way to gain voluntary control over our biochemistry. We are using technology to evolve towards a purely voluntary system, and away from an involuntary system.



Timothy Montague
12-8-2012

Sunday, October 14, 2012

Javascript: The Next Frontier

The future of JavaScript programming is obvious. It is becoming a structured language that will replace Objective-C and Android's Java as the platform for mobile devices.

Google began development on Dart a year ago, Microsoft just released TypeScript, and Mozilla is working on ECMAScript Harmony. So where is Apple in all this? Is Tim Cook just sitting on his hands while the competitors build the next generation of mobile?

It seems that only Google has a clear plan, with a Dart VM that could run on smart-phones and tablets. But will Google's Dart fall by the wayside in favor of Go?

There are a lot of unanswered questions. But I'm sure that these questions will be unraveled in the next 5 years as WebOS continues to mature.

Wednesday, September 5, 2012

Say Hello To My Lawyer; Siri.

If you have ever used Siri with the iPhone 4S, or any of the other personal assistant software (e.g. Vlingo, etc.), then you are probably aware of its AI abilities; in particular, the software has the ability to interpret "what you mean".

Sure, browsing the internet and scheduling appointments are all great. But what if I want something more from my voice-driven "personal assistant"?

Siri, could you stop being my personal secretary and instead act as my contract lawyer?

By partnering with LegalZoom and Nolo, voice-to-text software could be used to draft legal documents. This indubitably seems like a much larger consumer market than task scheduling and browser searching (that technology is as old as PDAs and AltaVista).

Here is an example use of the technology:

Siri, could you please write a contract that protects me from my soon-to-be wife taking everything I own in a divorce?

Siri then navigates the user through the process of drafting a prenuptial agreement and writes the contract.

Thursday, August 9, 2012

Voluntary Evolution | Thought Controlled STMs

Humans are co-evolving into lifeforms that can voluntarily and dynamically control their own genetics (and the genetics of others). Kurzweil's belief in technological singularity is a subset of a larger picture. The true awesomeness of emerging technologies and Moore's Law, while rooted in processing power, will be the ability for humans to control their biochemistry with thought-controlled nanotechnology.

A day will arrive when we can directly reshape the nano-scale world with thought, which will allow us to change the color of our eyes, search online networks (cognitive search engines), and manipulate our genetics; all directly and instantaneously via thought.

If you believe this is absurd, then ponder the following pieces of information:

1.) There are already devices on the market which can read brainwaves. InteraXon creates brainwave devices and focuses on thought-controlled computing. Honda has been testing thought-controlled robotics.

2.) A scanning tunneling microscope (STM) can move atoms. In 1989, IBM accomplished a revolutionary feat by spelling the acronym "IBM" with xenon atoms.

So what happens when we combine (1) thought-controlled computing, with (2) instruments that can manipulate atoms? What happens when the technologies developed by Honda merge with the technologies developed at IBM?

Technology will become embedded deep within our biological systems; everything from our neurology to our genetics will be accessible with molecular nanotechnology and controllable by thought, giving us the ability as a population to alter any individual's genotype and phenotype based on thought alone.

Essentially, the ultimate drive for humanity is the development of STMs which are thought-controlled and built on organic electronics. And just as cell phones have become smarter and more portable, so will STMs and AFMs (atomic force microscopes).

Timothy Montague has a Bachelor of Science in Biochemistry from the University of California, Santa Barbara. He has worked as an undergraduate researcher for the Jager group and the Wudl group at the California NanoSystems Institute.

Friday, June 22, 2012

WebOS


This post is not as far forward-thinking as my other posts. It builds on practical ideas that are already in development.

In the next 5 years (the near future - don't hold me to the exact date), we will have a converging web operating system that will allow users to drag-n-drop apps from one website to another. This exchange of drag-n-drop apps will allow developers to build websites in a sense, but this won't be any different from customizing your operating system. We won't really be building websites; we will be exchanging web applications. Some of these applications will be commercial in nature and others will be free.

However, the real success of the near-future internet will be the pooling of common resources that are accessible to all web applications. Just as virtual operating systems share common resources, so will web operating systems. At the backbone of a modern web operating system will be (or should be) the organization of data in open online databases that are freely accessible via a REST API.

In this new WebOS web-app world, Facebook, Twitter, Apple, and Google will be dominant players. If Yahoo! doesn't close their business, they could have a lot of success in the WebOS app market.

Google has tried to implement a web operating system on Chrome notebooks. Google alone will probably not be effective as a uni-polar power in driving global adoption of a WebOS. Microsoft could have had this successor operating system, but they threw in the towel a long time ago. The new operating system is built on web tools, an area where Microsoft hasn't adapted well. Currently, Apple is best positioned to lead this change to a WebOS - due to their operating system, app market, and recent focus on what Google does. However, Apple focuses more on design and user experience and less on standards. The real value, as I mentioned, is in the commonly shared web databases, an area that is best designated for Oracle and perhaps IBM.

I am not sure how this will all play out; this is just a best-guess analysis from my current perceptions. In the next few years the corporate playing field might completely change, but the facts suggest that the internet is becoming a WebOS.

6/22/2012 - Timothy F. Montague

Saturday, April 14, 2012

Voluntary Evolution


Since the 1850s, Charles Darwin's scientific publications on vestiges and natural selection have stirred up a lot of controversy. And while Darwin's theories pertain to naturalism and organic species, researchers continue to edge closer to the borders that separate the artificial and the natural.

Classical selection, as presented in biology textbooks, is divided into artificial and natural. However, I see artificial selection as a misnomer. How can we state that humankind evolved and still not acknowledge that our industrialization and technological advancements are extensions of that evolution? How can we alter the natural environment from which we evolved and label it as an artificial selection process? Biomimetics and molecular nanotechnology are not any more artificial than our own biochemistry. What is termed "artificial" is not artificial at all. Why would it be any different for our cells to engineer proteins and biochemicals from raw materials absorbed by our intestines than for humankind to engineer molecular systems?

The harsh reality is that humans and molecular technology are co-evolving. Endosymbiotic theory suggests that mitochondria, and other organelles, were once separate single-cell organisms that co-evolved with eukaryotes and eventually became integrated biological systems. Molecular technology will also become integrated with biological systems as we continue to evolve.

The harsher reality is that the human population engineers its own selection process. If we want to live longer, we extend the length of telomeres; if we want to correct our vision, we undergo corrective lens surgery. We are constantly manipulating our genetic code, our environment, and our evolution.

Where is this leading?

As technology becomes more intricate (such as nanotechnology and picotechnology) we will be able to voluntarily control our own genotypes, phenotypes, bulk materials, and our own evolution.

Three postulates on advancing technology


(1) First-world countries will advance technology more rapidly (as a population) than poorer nations with limited resources.
(2) The development of technology occurs exponentially as an extension of human evolution.
(3) Humankind will integrate organic technology into our own biological systems, and this will become the evolutionary process of post-humans.

Evolution of post-humans


Our post-human population should strive for the ultimate benchmark, which is the manipulation of materials by thought-controlled devices. This is not as far-fetched as it sounds. On March 31, 2009, Ian Rowley wrote an article in BusinessWeek about how Honda has programmed ASIMO (their humanoid robot) to respond to human thought [1]. Therefore, I propose that post-humans will be somewhat like shape-shifters who can control their surrounding physical environment by thought-controlled nanotechnology. This will affect all industries: we will be able to write computer programs, change the color of our wallpaper, and genetically modify our own chromosomes via thought.

Conclusion


In conclusion, technology is an extension of human evolution that is constrained by Charles Darwin's ideas of selection. I view all human technologies as a part of the evolution process. While there are no lineages on the evolutionary tree that stem from Homo sapiens, there will be an era when biologists must face the controversy between artificially designed organisms that have reached singularity and those produced by Mother Nature.

Timothy Franklin Montague
5/09/2011

[1] Rowley, Ian. "From Honda, a Mind-Reading Robot." BusinessWeek, March 31, 2009.
http://www.businessweek.com/globalbiz/content/mar2009/gb20090331_865756.htm

Friday, April 13, 2012

Cognitive software

In the future, software programs will be executed by thought - software that we can rewrite and re-map to our thought patterns. Today, the computer keyboard is merely a barrier between our mind and the CPU, through which we combine keywords (and symbols) to write software. One day, we will map character sets (e.g. ASCII or Unicode) to our cognition patterns. Instead of typing out software, we will think out software!

Sunday, April 1, 2012

Cognitive Internet

This post further details my concept of Voluntary Evolution.

Evolution does not act on the individual alone; we evolve as a species, and the internet is a part of that evolution. Today (2012), Google searches can be conducted by typing in keywords or by speaking keywords. One day, Google searches will be conducted by thinking in keywords.

The greatest advancement in technology over the past 30 years has been the globalization of computer software and hardware, which has come about in the form of mobile technologies. And in the future, nanotechnology will naturally become embedded into our biological systems and the environment around us.

This cognitive internet will allow us to remotely perform in-vivo genetic modifications, synthesize bio-molecular structures, and regulate our natural environments on the nano-scale.

- Timothy Montague April 1, 2012


Figure 1. Honda is working on the technologies of tomorrow. Above is a wearable device that maps brain patterns as the user thinks of particular words. These words can be used to drive internet searches through websites like Google.

Saturday, March 31, 2012

When Statements

Switch statements and if-else statements really accomplish the same thing in a programming language, and to be honest, the trouble lies with the if-else statement (the dangling-else problem). And while the Switch statement can handle multiple cases, it effectively lacks the ability to evaluate conditions. My solution to both of these problems is a new keyword called the When statement**.
if ( $time == 3 ) { print("3pm"); }
else if ( $time > 3 ) { print("After 3pm"); }
else { print("Anytime but 3pm"); }

switch ($time) {
  case 3: print("3pm"); break;
  default: print("Anytime but 3pm");
}

when ($time) {
  3: print("3pm");
  >3: print("After 3pm");
  else: print("Anytime but 3pm");
}
The Switch statement can't evaluate conditions, so the expression >3 (greater than 3) is not possible. The When statement can handle multiple cases like the Switch statement, and it can evaluate conditions like the If-else statement. This effectively allows the When statement to replace both the If-else statement and the Switch statement in traditional programming languages. Furthermore, this produces less overhead and better organization of logic blocks.

Additionally, the When statement can evaluate multiple parameters.
when ($time, $space){
  $time > 3: print("It's afternoon");
  $space != $time: print("space and time are not equivalent");
  else: print("This is the default option");
}
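
Until a language adopts the keyword, a rough approximation is possible in today's JavaScript with an array of guard/action pairs (just a sketch of the idea, not the proposed syntax):

// when(): runs the action of the first case whose guard returns true;
// an always-true guard at the end plays the role of "else"
function when(cases) {
  for (var i = 0; i < cases.length; i++) {
    if (cases[i].guard()) { return cases[i].action(); }
  }
}

var time = 5;

when([
  { guard: function () { return time === 3; }, action: function () { console.log('3pm'); } },
  { guard: function () { return time > 3; },   action: function () { console.log('After 3pm'); } },
  { guard: function () { return true; },       action: function () { console.log('Anytime but 3pm'); } }
]);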

** Could also be called the monty_switch in reference to my last name.

Sunday, January 22, 2012

Junk DNA: it's a salt hash.

I woke up this morning with the idea that "junk DNA" is not really junk at all. About 98% of human DNA is non-coding, and therefore it does not directly translate into proteins. During the transcription process (from DNA to mRNA), introns are spliced out of the precursor mRNA sequence by a spliceosome complex to produce the mature mRNA strand used in protein translation.

National Public Radio (NPR) recently featured an article on "junk" DNA [1], explaining that scientists have shown that RNA introns play an important role in gene regulation.

However, the idea that occurred to me this morning is that a portion of "junk" DNA might actually be a salt hash. Often in database security, a salt is added to a string before it is hashed. Let's say that you want to protect a password in a database. First you salt the password with some extra information, and then you run the salted sequence through a hash function.

For example:
password = "cat";
salt = "tatatata";
salted_password = password + salt; // "cattatatata"

// encrypt the salted password
encrypted_password = hash(salted_password);
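
In Node.js, for instance, that salt-and-hash step might look something like this sketch (SHA-256 is used purely for illustration):

var crypto = require('crypto');

var password = 'cat';
var salt = 'tatatata';
var salted_password = password + salt;          // 'cattatatata'

// Hash the salted password
var encrypted_password = crypto
    .createHash('sha256')
    .update(salted_password)
    .digest('hex');

console.log(encrypted_password);                // 64-character hex digest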

Having a specific spliceosome complex that decrypts the non-coding "junk" may be an important security feature against foreign viral ribozymes. If a virus were able to read DNA and directly produce mRNA without any form of security, then it could essentially ransack cell metabolism. Perhaps some of the "junk" DNA / precursor mRNA acts as a hash salt for the mature mRNA strand, where the spliceosome complex is the hash function. The precursor mRNA would therefore be a salted password.

// salted password
pre_mRNA_strand = "cattatatata";

// decrypt the salted password
mRNA_strand = spliceosomes(pre_mRNA_strand); // "cat"



Wednesday, January 11, 2012

A better markup language

I don't mean any disrespect to Tim Berners-Lee, but I strongly dislike HTML. It's a bulky language that uses more bandwidth than necessary, and it's annoyingly close to object-oriented programming. With a few slight modifications, developers could really merge it into existing languages; something like JSON.

This is what I have come up with recently:

<div id(myid):
Need a light-weight substitute for HTML? <a href(new.html): Click Here >!
>


Compare to the HTML version:

<div id="myid">
Need a light-weight substitute for HTML? <a href="new.html"> Click Here </a>!
</div>


Notice that [1] parentheses (myid) are used instead of ="myid", [2] elements are contained entirely in a single set of tags < >, unlike HTML, which uses < element > to open and </ element > to close, and finally [3] text is separated from code with a colon.

This alternative HTML reduces bandwidth strain and the amount of time spent writing markup. However, this is not quite enough. It would be much better if the markup followed conventional object-oriented practices; elements are objects after all.
I haven't figured this part out yet; when I do, I will update this article. Maybe we could even build a parser for the alternative HTML language.
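
In the meantime, here's a rough sketch of treating elements as plain JSON-style objects and rendering them to ordinary HTML; the shape of the objects is just one guess at what object-oriented markup could look like:

// Render a JSON-style element tree to ordinary HTML
function render(node) {
    if (typeof node === 'string') { return node; }            // plain text
    var attrs = '';
    for (var key in node.attrs) {
        attrs += ' ' + key + '="' + node.attrs[key] + '"';
    }
    var children = (node.children || []).map(render).join('');
    return '<' + node.tag + attrs + '>' + children + '</' + node.tag + '>';
}

var page = {
    tag: 'div', attrs: { id: 'myid' },
    children: [
        'Need a light-weight substitute for HTML? ',
        { tag: 'a', attrs: { href: 'new.html' }, children: ['Click Here'] },
        '!'
    ]
};

console.log(render(page));
// <div id="myid">Need a light-weight substitute for HTML? <a href="new.html">Click Here</a>!</div>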


Monday, September 19, 2011

A functioning force is always on course

I believe that the universe is perfect in design - that every detail and each action (for better or worse) is predetermined - and that if the universe did not function perfectly, it would never have existed. It reminds me of a phrase my uncle recites: "a functioning force is always on course".

Some may see this as God's design or as fate. I believe that if God does exist, he/she is a part of all of us - everywhere and everything - and that God is not good or bad, but falls outside the morals of society.

Furthermore, I do not believe in good and evil; I believe that morals exist in society, that morals are important to society, and that morals are predetermined and needed for existence - but obviously they do not concern inanimate objects (i.e. a rock does not experience good and bad).

Typically, a religious follower loses their faith in God (or Gods), because something happens to them that they can't forgive. They ask, "If God is good, then why do bad things happen?". I believe that "bad" things happen because it triggers other sequences of events that keep our universe in existence.

I do not mean to imply that if we do the wrong thing, or say the wrong thing, it will stop the system from functioning; I am saying that the system (the universe) would never have begun without being perfect, conserved, and predetermined in design.

These are my thoughts - feel free to believe what you like.


Timothy Montague
10-19-2011


Thursday, April 1, 2010

Google Cell Biology? (RCSB Protein Databank in Google Earth)

Lately, I have been impressed with Google Earth and Maps and have tinkered with some of the JavaScript examples provided on the Google Earth tutorial page.

I must admit that I have been harboring the idea to design a 3D environment which allows biochemists and cellular biologists the ability to navigate the interior of a theoretical cell.

Imagine traveling around in a cell! A complete cellular spatial layout - traveling from the nuclear envelope to the mitochondria - and having the ability to witness CGI trajectories of enzyme mechanisms and movements. Visually experience the fluid movements of the lipid bilayer membrane.

How could something like this come to fruition?

It seems that a plausible bridge exists between the information technologies surrounding Google Earth and the bountiful supply of RCSB Protein Data Bank files. A software project like this is much larger than a small group of scientists and programmers can handle; however, a vast implementation, perhaps acknowledged by Google, could have a pivotal advantage for future engineering of "artificial" macromolecular systems.

Looks like I'll be spending the next few days trying to load PDB files into Google Earth.

Timothy Franklin Montague
4/1/2010

UPDATE:

I have made some progress on importing PDB files into Google Earth!
By this path I was able to convert PDB models into KMZ models:
PDB -> VRML -> 3DS -> KMZ

(1) Using Chimera convert a PDB into VRML
(2) Using Blender (or 3DS Max) convert a VRML into 3DS format
(3) Using Google SketchUp convert 3DS into KMZ format

Check out the following images!


[Image: PDB 3GH3 rendered in Chimera]

[Image: The PDB file imported into Google SketchUp]

[Image: The PDB file displayed in Google Earth!]

I will continue to update this post, when I expand this project. Enjoy!

Timothy Franklin Montague
4/12/2010