TinMan Technology - How it works
Our PC-based technology comprises a development environment and a runtime environment. Simply stated, the development environment is used to construct, simulate, and test an AI system, and the runtime environment is used to run that AI system from within a host application. The TinMan Systems AI Builder IDE (Integrated Development Environment) provides a set of computational components that are used to visually construct an aggregate computational system that meets the unique needs of the end user. The host application is typically developed by the end user, but TinMan Systems provides professional services to assist with development of the host application and/or prototype applications.

The process of building and deploying an AI system with TinMan AI Builder consists of four steps:


  1. Construct the AI system visually in the IDE,
  2. Test and simulate the AI system in the IDE to refine its functionality,
  3. Export the AI system from the IDE for deployment with your application, and
  4. Integrate the exported AI system into your application (the host application) using the runtime API (Application Programming Interface).



Illustration: The Four Steps to Building and Deploying an AI System


The first phase, or design phase, involves the selection, customization, and connection of a set of computational components to achieve the desired overall computation. Each of the 85 computational components can be added to the AI system any number of times, in series or in parallel, and each can be customized to suit the form of data processing required. The current set of computational components includes abstracted mathematical and trigonometric components, pattern classification/recognition (including self-generating artificial neural networks), pattern matching, and statistical and vector-based algorithms. Each component has a distinct symbol and visual shape, with default and custom dynamic inputs and outputs, for intuitive representation of function and purpose; each is labeled verbosely along with its current computed value.
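To make the component model concrete, the following minimal C sketch shows one way such a dataflow graph could be represented. The type names, fields, and the summing component are illustrative assumptions, not TinMan's actual internals.

```c
#include <stddef.h>

/* Illustrative only: hypothetical types for a dataflow component graph.
 * They do not reflect TinMan's actual internal representation. */
typedef struct Component Component;

typedef struct {
    const char *label;    /* verbose label shown in the IDE        */
    Component  *source;   /* upstream component feeding this input */
    double      value;    /* current computed value                */
} Port;

struct Component {
    const char *symbol;                    /* distinct visual symbol   */
    Port       *inputs;                    /* dynamic input ports      */
    size_t      n_inputs;
    Port        output;                    /* computed output port     */
    void      (*compute)(Component *self); /* the component's function */
};

/* Example: an abstracted math component that sums all of its inputs. */
static void sum_compute(Component *self) {
    double acc = 0.0;
    for (size_t i = 0; i < self->n_inputs; ++i)
        acc += self->inputs[i].value;
    self->output.value = acc;
}
```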


Connections are made at design time by dragging from a component's input to another component's output, and vice versa. Circular dependencies, feedback loops, and recursive pathways are handled automatically by the IDE and its underlying logic execution engine. Components are added to system modules, which can in turn be added to an AI system without limit and connected to function in series and/or in parallel.
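The text does not say how the engine resolves such cycles internally; one common technique in dataflow engines, sketched below as a hypothetical illustration, is to double-buffer each output so that a feedback loop reads the value computed in the previous cycle rather than recursing.

```c
/* Hypothetical sketch of one common cycle-breaking technique:
 * double-buffered outputs. Downstream readers (including feedback
 * loops) see the value latched from the previous cycle, so the
 * evaluation order never recurses. */
typedef struct {
    double current;  /* value being computed during this cycle     */
    double latched;  /* value visible to readers during this cycle */
} LatchedOutput;

static void begin_cycle(LatchedOutput *out) {
    out->latched = out->current;  /* expose last cycle's result */
}
```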


Because each of these modules contains its own set of interconnected components, it is considered a complete sub-system that can be exported for re-use in other AI systems, or even within the same AI system when redundancy or replication of a set of processes is desired. For example, if an AI system needed to process sensor information coming from each of the wheels of a four-wheeled robot, a single module could be designed to handle one wheel's sensors and then replicated (copied) and applied on a one-to-one basis to each of the four sets of wheel sensors. These modules could then report their information to a second-level process that interprets synchronicity and/or other feedback from the four wheels.
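As a rough C sketch of that wheel-sensor example (the names and the smoothing logic are invented for illustration), the same module definition can be instantiated four times and fed into a second-level synchronicity check:

```c
#define NUM_WHEELS 4

/* Hypothetical wheel-sensor module: one definition, replicated per wheel. */
typedef struct {
    double raw_speed;       /* input: wheel encoder reading */
    double filtered_speed;  /* output: smoothed wheel speed */
} WheelModule;

static void wheel_module_cycle(WheelModule *m) {
    /* Simple exponential smoothing stands in for the module's real logic. */
    m->filtered_speed = 0.8 * m->filtered_speed + 0.2 * m->raw_speed;
}

/* Second-level process: measure how far the four wheels are out of sync. */
static double wheel_sync_error(const WheelModule wheels[NUM_WHEELS]) {
    double min = wheels[0].filtered_speed, max = min;
    for (int i = 1; i < NUM_WHEELS; ++i) {
        double s = wheels[i].filtered_speed;
        if (s < min) min = s;
        if (s > max) max = s;
    }
    return max - min;  /* a large spread suggests slip or a faulty sensor */
}
```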


During the test and simulate phase, random data and/or data read directly from spreadsheets can be used to verify functional accuracy and structural efficiency. Once the system is exported, the runtime API is used to load the AI system, populate its inputs, and execute cycles of logic. External data from the host environment is fed to (more precisely, subscribed to by) the modules within the application, and the external outputs are the result of the AI system's computation. At runtime, these system outputs expose the resulting values from each cycle of execution; it is then up to the host environment to determine what to do with that data.


How it Works

Once an AI system has been constructed, it is exported to a file for integration with the host application. At runtime, the host application collects information from its sensors and/or human interface components and presents the desired set of data to the AI system for computation. The AI system performs its computation exactly as it was designed to in the IDE and updates the values of all of its exposed outputs. The host application then reads these values and responds appropriately. Programming that response is the task of the host application developer; responses might include visual display of the information, authentication, classification or identification of a subject (as in a biometric application), movement or articulation of a robotic joint, presentation of a word or sentence, prediction of an event, or adjustment of the sensitivity levels of the sensors themselves. The key point is that the AI system is presented information, processes that information, and presents the computed results to the host application.

Illustration: Integrated AI System Processing Information within Host Application

The complete process, from updating the input values, through the computation of the system, to the presentation of the output values, is considered a single 'cycle of execution'. In some host environment configurations, this cycle might be triggered manually by the press of a button or the selection of a menu item (e.g. a medical application that performs a diagnosis based on inputs regarding patient history and symptoms). In other configurations, the computation might run continuously, many times per second (e.g. a robotics device that maintains balance based on continuous information from its gyro and pressure sensors, executing 20-100 cycles per second). In either case, computation is the production of output information from the processing of input data. Both triggering styles are sketched below.
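In this hedged sketch, the helper functions are stand-ins for host code; `run_one_cycle()` represents the full set-inputs, execute, read-outputs sequence described in the next paragraph.

```c
#include <stdbool.h>

/* All names here are hypothetical stand-ins for host-side code. */
extern void run_one_cycle(void);     /* set inputs -> execute -> read outputs */
extern bool button_pressed(void);
extern void sleep_ms(unsigned ms);   /* platform sleep, e.g. Sleep() on Windows */

/* Manual mode: one cycle per user action, e.g. a diagnosis button. */
void manual_mode(void) {
    if (button_pressed())
        run_one_cycle();
}

/* Continuous mode: e.g. ~50 cycles per second for a balancing robot. */
void continuous_mode(void) {
    for (;;) {
        run_one_cycle();
        sleep_ms(20);  /* 20 ms period = 50 Hz, within the 20-100 Hz range */
    }
}
```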

The runtime engine is made available as a Windows Dynamic Link Library (DLL) and is accessed and operated by the host application via straightforward C API calls; other forms are available. At runtime, the host application typically performs five operations via this API: a one-time load of the desired exported AI system file, setting of the input values prior to execution, execution itself, extraction of the resulting output values, and finally a one-time unload of the exported AI system file. Setting and reading input and output values can be optimized by using interface pointers to the actual internal AI inputs and outputs.
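Since the actual function names of the runtime API are not given here, the `tm_*` calls below are hypothetical placeholders; the sketch only illustrates the five-operation sequence just described.

```c
/* HYPOTHETICAL API: the tm_* names are placeholders for the five
 * operations described above, not TinMan's actual signatures. */
typedef void *TmSystem;
extern TmSystem tm_load(const char *exported_file);           /* 1. one-time load   */
extern void     tm_set_input(TmSystem s, int idx, double v);  /* 2. set inputs      */
extern void     tm_execute(TmSystem s);                       /* 3. execute a cycle */
extern double   tm_get_output(TmSystem s, int idx);           /* 4. read outputs    */
extern void     tm_unload(TmSystem s);                        /* 5. one-time unload */

/* Host-side stand-ins, also hypothetical. */
extern double read_gyro(void);
extern void   apply_to_actuator(double correction);

int main(void) {
    TmSystem sys = tm_load("balance.ai");          /* exported AI system file */

    for (int cycle = 0; cycle < 100; ++cycle) {
        tm_set_input(sys, 0, read_gyro());         /* host sensor data in */
        tm_execute(sys);                           /* one cycle of logic  */
        apply_to_actuator(tm_get_output(sys, 0));  /* host acts on result */
    }

    tm_unload(sys);
    return 0;
}
```

If per-cycle call overhead matters, the setting and reading shown here could instead go through interface pointers bound directly to the system's internal inputs and outputs, as noted above.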

Platform Support

TinMan Systems' technology foundation is currently offered on both PC and web-based platforms. Our PC-based product is AI Builder; our web-based platform is primarily provided via professional services. Our in-house tools enable us to build a web-based AI system for hosting and access via web services on TinMan web servers or, depending on the project type, to install the functional code on designated customer servers.

Some Example Host Implementations of AI Systems

Medical Diagnostics
Medical diagnostics applications can leverage AI systems through classification of previously known conditions, prediction of future potential symptoms, prediction of the most likely diagnosis based on medical history and symptoms, and so on. These classifications can be made by providing a trained AI system with all known factors and/or medical history and allowing the AI system to facilitate the diagnostic process with probable results. These AI systems can become increasingly accurate at diagnosis with continued updates of known outcomes.
Biometric Pattern Recognition
Image, voice, fingerprint, palm print, face, and other biological attributes can be provided to an AI system, which matches this digital signature against previously observed (known) signatures to establish identity or classification. This matching is usually done after pre-processing steps and can be performed in many ways, ranging from simple Euclidean-distance-based assessment to Markov chains and neural networks.
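A minimal sketch of the simplest approach mentioned, nearest-neighbour matching by Euclidean distance over feature vectors (the fixed four-element vector length is purely illustrative):

```c
#include <math.h>
#include <stddef.h>

#define FEATURES 4  /* illustrative feature-vector length */

/* Euclidean distance between two feature vectors. */
static double euclidean_dist(const double *a, const double *b) {
    double sum = 0.0;
    for (size_t i = 0; i < FEATURES; ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return sqrt(sum);
}

/* Return the index of the enrolled signature closest to the probe. */
static size_t best_match(const double *probe,
                         const double enrolled[][FEATURES], size_t count) {
    size_t best = 0;
    double best_d = euclidean_dist(probe, enrolled[0]);
    for (size_t i = 1; i < count; ++i) {
        double d = euclidean_dist(probe, enrolled[i]);
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;
}
```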
Robotics Devices
Multiple physical sensors can feed information directly to an AI system that processes and acts on this data. External effectors and actuators can then reflect these decisions in their speed, position, orientation, angle, and so on.
Computer Game Programming
Non-Player Character (NPC) behaviors can be managed through state transitions. Decision and action selection can be made through specifically designed neural networks that take into account environment, resource, and player-threat factors.
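A minimal sketch of state-transition-driven NPC behavior (the states, factors, and thresholds are invented for illustration):

```c
/* Hypothetical NPC states and transition rules driven by threat and
 * health factors; thresholds are arbitrary illustrative values. */
typedef enum { PATROL, ATTACK, FLEE } NpcState;

static NpcState next_state(NpcState s, double player_threat, double health) {
    switch (s) {
    case PATROL: return (player_threat > 0.5) ? ATTACK : PATROL;
    case ATTACK: if (health < 0.2) return FLEE;
                 return (player_threat < 0.1) ? PATROL : ATTACK;
    case FLEE:   return (health > 0.6) ? PATROL : FLEE;
    }
    return s;  /* unreachable; keeps compilers happy */
}
```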
Threat Analysis and Prediction
Continuous monitoring of multiple conditions and tracking of event sequences can support threat-level attribution and early-warning systems.
Industrial Automation
Automation of mechanical arms, holders, flow control, and the like can be achieved through continual monitoring of, and response to, system and resource sensors.
Human Condition Monitoring and Assessment
On-body sensors are becoming widespread; sensors in mobile device extensions as well as in clothing and vehicles provide trained AI systems with the information needed to assess and report on a person's biological condition.
Automated Stock Trading
Automated trading systems have long been used to monitor and act on stock information, using artificial intelligence systems trained with historical trends and market data. These predictions can be continuously honed and improved by circular feeds of the responses to, and effectiveness of, the recommended actions and their results. Artificial neural networks, Bayesian analysis, fuzzy logic, and hidden Markov models are commonly deployed in these AI applications.