Many core Tcl/Tk commands use named, optional arguments:
    glob -nocomplain -type f -tails -directory $spooldir *@*
    lsort -index 0 -stride 2 -dictionary $search_counts
    entry .pw -show * -textvariable pw -width 15
But no support for this pattern is provided by the argument parsing of [proc]-defined commands, leading to horrors like:
    searchdb "" $ss 0 notice "" "" $style $db $maxresults "" "" "" 1 "" \
        0 progresults "" "" "" "" 0 "" "" $userid $newtestrate 0 "" "" $website
parseargs is a C extension using a custom TclObjType to provide core-like argument parsing at speeds comparable to [proc] argument handling, in a terse and self-documenting way.
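For readers unfamiliar with the pattern, a minimal pure-Tcl sketch of named-option parsing conveys the flavor. This is illustrative only and is not the parseargs extension's actual API; it also assumes every option takes a value, unlike boolean flags such as -nocomplain:

```tcl
# Illustrative sketch, NOT the parseargs API: consume leading -option value
# pairs from a list, leaving positional arguments behind.
proc parseopts {argsVar spec} {
    upvar 1 $argsVar argv
    array set opt $spec          ;# spec supplies known options and defaults
    while {[llength $argv] && [string match -* [lindex $argv 0]]} {
        set name [lindex $argv 0]
        if {![info exists opt($name)]} {
            error "unknown option \"$name\""
        }
        set opt($name) [lindex $argv 1]
        set argv [lrange $argv 2 end]
    }
    return [array get opt]
}

set argv2 {-type f -directory /var/spool file1 file2}
puts [parseopts argv2 {-type b -directory .}]
puts $argv2   ;# remaining positional arguments: file1 file2
```

A real implementation would also handle valueless flags, abbreviation, and the `--` end-of-options marker, which is where a C-level TclObjType pays off.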
Tcl is great at creating working prototypes and at building industrial-strength applications.
What does it take to make a commercial product? Licenses, registration, upgrades, code obfuscation, websites, publicity and more.
Clif Flynt will discuss his experiences taking a project from cool idea to commercial release.
This talk reminds the audience of the existence of the C Runtime In Tcl (Critcl), a framework for embedding C code within Tcl, and of the development it has undergone in the past years, i.e. new features in its core and supporting packages added to it.
This talk will discuss hyperfeed, FlightAware's core flight data processing engine written in Tcl. Developed incrementally over the course of a decade, hyperfeed is responsible for ingesting all of FlightAware's 40+ data feeds, aggregating the data together, resolving inconsistencies, filling in gaps, detecting and filtering out unreliable or bad data, and producing a single data feed that represents an all-encompassing coherent view of worldwide flight traffic as understood by FlightAware. These results are visible to millions of users through the website, mobile apps, and various APIs.
Hyperfeed is a high-performance, highly parallel and concurrent system that can easily saturate 32+ cores of a modern high-end machine, and it's all written in Tcl. It can run for weeks or months at a time and is only restarted to pick up software updates. It consists of a dispatcher process which routes incoming messages to child interpreters, which do work in parallel while sharing data and communicating state when necessary through a centralized PostgreSQL data store. Read-committed transactional semantics from PostgreSQL are heavily relied upon to guarantee correctness under contention.
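The dispatch pattern can be sketched in miniature within a single process. This is an illustrative toy using child interpreters and a stable hash of a message key, not hyperfeed's actual code; the real system uses separate processes coordinating through PostgreSQL:

```tcl
# Toy dispatcher sketch (hypothetical names): route each message to one of
# several child interpreters, keyed by a stable hash so related messages
# always land on the same worker.
set nworkers 4
for {set i 0} {$i < $nworkers} {incr i} {
    interp create worker$i
    worker$i eval {
        proc handle {msg} {
            # a real worker would parse, reconcile, and store the message
            return "processed: $msg"
        }
    }
}

proc dispatch {msg} {
    global nworkers
    set key  [lindex $msg 0]
    set slot [expr {[zlib crc32 $key] % $nworkers}]
    return [worker$slot eval [list handle $msg]]
}

puts [dispatch {UAL123 position 42.1 -87.9}]
```

Hashing on the first word of the message (here, a hypothetical flight identifier) keeps updates for one flight serialized within a single worker while unrelated flights proceed in parallel.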
Tcl is used in several interesting or novel ways:
- Data that is not potentially in contention from multiple processes and doesn't need transactional semantics is stored in speedtables.
- Tcl's event loop lets us easily defer processing of messages asynchronously until we've learned more information over time. We've built a virtual sequencer on top of this to facilitate the use of virtual clocks in scheduling and sequencing commands; this allows us to replay historical scenarios easily.
- During a major architectural rewrite in 2015, when we redesigned hyperfeed from a single-threaded program into a parallelized system, we needed to introduce PostgreSQL as a replacement for some of the speedtables used by the older architecture. Tcl's dynamic and introspective nature allowed us to do this with minimal rewriting of code, by dynamically redefining procs and introducing a query translation layer.
- We have often performed software updates with zero downtime and no restarts thanks to hot code reloading, again made possible by Tcl's highly dynamic nature.
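The proc-redefinition trick behind the translation layer can be sketched in a few lines of standard Tcl. The names here (lookup, sql_lookup, lookup_speedtable) are hypothetical stand-ins, not hyperfeed's actual code:

```tcl
# Hypothetical sketch: reroute an existing proc through a new backend
# without touching its call sites, using [rename] and redefinition.
proc lookup {key} {
    return "speedtable result for $key"
}

# Later, transparently interpose a (stubbed) SQL layer:
rename lookup lookup_speedtable
proc lookup {key} {
    # translate the old call into the new backend; fall back if it fails
    if {[catch {sql_lookup $key} result]} {
        set result [lookup_speedtable $key]
    }
    return $result
}
proc sql_lookup {key} {
    return "postgres result for $key"
}

puts [lookup 42]   ;# prints: postgres result for 42
```

Because Tcl resolves command names at call time, every existing caller of lookup picks up the new implementation immediately, which is also the mechanism that makes hot code reloading possible.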
Tcl doesn’t have a build system for extensions, it has a build ecosystem. We expect an extension author to be able to develop and debug in 8 different automation languages across two major tool sets. Programmers who are proficient in all of those are few, far between, and far too busy. Practcl is a robust suite of tools for Tcl to build its own extensions, in Tcl. By automating builds in Tcl we leverage the experience of developers who already write in Tcl. We also eliminate a lot of platform-specific brittleness baked into the major tool sets.
This talk will examine Swift Tcl, a bridge between Swift and Tcl, providing deep interoperability between the two languages.
Swift developers can use Tcl to create powerful new ways to interact with, debug, construct automated testing for, and orchestrate the high-level activities of their applications.
Tcl developers can use Swift to get its sweet, expressive, high-performance, scripting-language-like capabilities wherever they like, while retaining all of their existing code.
Either can go in for a lot or a little.
Developers can extend Tcl by writing new commands in Swift and extend Swift by writing new commands in Tcl. Tcl commands written in Swift are tiny and simple compared to those written in C. They are a joy.
Tcl has been a significant component in many EDA applications for many years, and it remains so today. It has worked well despite the various limits on data structure sizes because, in cases where there were large data sets, native C++ data structures were used while Tcl worked around the edges, dealing with reasonably small subsets of data, control files, data structures, and metadata. With the latest integrated circuit manufacturing technologies, even these previously limited data sets frequently exceed Tcl's internal size limits, such as the maximum size of a list or string. As EDA goes, the rest of the industry follows, so the number of applications that can reasonably expect to create large lists, arrays, strings, and so on is quickly growing.
The body of the paper will present several case studies of real cases where what used to be reasonable subsets of design data, previously expected to stay within a fairly small size, have suddenly grown to exceed the limits on the size of a Tcl list, array, or string. We will look at recent trends in EDA analysis, both for aggregate data (things like the number of devices or nets in a chip design) and for what used to be 'safe' data sets for Tcl, such as the number of devices or polygons in a cell on a net, the length of a verification program deck, or the number of cells in a design, and show that even reasonably small subsets of data are frequently growing to exceed the limits imposed by Tcl. In addition, customers are requiring more ad hoc exploration and analysis of their designs, and these sorts of analyses are ideally suited to Tcl.
It is critical for the EDA industry today that we start moving toward a Tcl 9 implementation that removes all of the historical 32-bit size limitations.
Roasting the TCT, as is present
A review of the history, current practice, and potential future of Tcl's value system. It uses past successes as a model for perceiving and realizing opportunity, and for mapping out paths to get there.
Speedtables has gone through many changes and expansions as it has become core to FlightAware's operations. The Speedtables API, implemented as a common front end to native C tables, network sockets, and tables backed directly by PostgreSQL and now Cassandra, has allowed the development of systems that run transparently over local and remote data stores.
This talk will look at the major enhancements to Speedtables over the past decade and discuss the lessons learned in the process.
ActiveState has shipped a commercially supported distribution of Tcl--ActiveTcl--since 2001. In the past year we have been diligently working at bringing all of our language distributions, including ActiveTcl, up to a new standard of excellence. It has been a long process, but we are almost there! This talk will cover what ActiveState is doing in support of ActiveTcl (and Tcl) and look at what the future holds.
I will demonstrate a new extended form of Tcl, called TSL, integrated with SQLite and more (it incorporates portions of Fossil).
The TSL system is developed by Smallscript Corp and used in production servers at "http://www.thelightphone.com", but has otherwise not been released or formally announced. (I'm CTO at TheLightPhone corp)
Notes: I suspect the talk might generate significant interest, so it might be good if we can schedule a slot early enough in the conference to allow time for subsequent BOF discussions.
Given its design and background, I chose the Tcl 2016 conference as an appropriate venue for a formal announcement. The "http://ts-lang.org" site for the language will quietly go online on November 1st.
The Tclers Wiki has been around since 1999 and is arguably the second oldest wiki still running. During that time it evolved from a simple built-in web server using the Metakit database, to deploying as a starkit, to using the SQLite database via TDBC, to using the Wub web server. It has also had a couple of makeovers, the most recent being in 2008. The one constant during that time has been the markup language. Although it has itself evolved, a page authored in 1999 will still render today.
Container technologies, and especially Docker, are quickly revolutionising the way we architect, develop and operate large applications. This paper presents four interrelated tools integrating the Docker sphere with the Tcl world. Two different containers (Alpine and Ubuntu) aim at serving as the base for Tcl applications. An implementation of the Docker API can help when gluing containers together or when supervising or introspecting containers. Concocter is a watchdog and dynamic generic controller process for use in containers as the first process. Finally, Machinery integrates all Docker tools (Engine, Swarm, Compose, Machine) to (re)create entire distributed architectures in a flexible and deterministic manner.
Show the things you are working on.
The Raspberry Pi is a breakthrough device, leveraging the power of a modern smartphone processor onto a single-board computer at a low price point of US$35. With the latest version having four gigahertz-class ARM CPU cores, a gigabyte of RAM, and a well-supported version of Debian Linux, use of the Pi has grown far beyond its original purpose as a teaching computer. Coupling the Pi with an inexpensive USB-based software defined radio (SDR) dongle, open source software, the Internet and, yes, Tcl, we have fielded a worldwide network of more than 8,000 receivers, receiving and backhauling location, speed and heading information for aircraft equipped with modern Automatic Dependent Surveillance-Broadcast (ADS-B) transponders and, where enough nodes are present, performing multilateration to infer the position of aircraft equipped with older transponders as well.
Nodes boot up, establish an encrypted TLS connection to FlightAware's backend servers, log in, and begin sending data automatically, crediting the owner for positions received. Owners get a number of ways to see how their site is doing: they can control whether automatic updates are allowed, update their software and view logs through a web page, view the air traffic their site is tracking over the web, and more.
Building on standard Debian binary package distributions of Tcl, iTcl, TclX, tcltls, and others, Tcl was an integral part of this effort, allowing us to develop the technology and go from zero to 8,000 nodes in a little over two years.
The paper will describe the evolution of our ADS-B receivers from a hodgepodge of expensive technologies (for instance, an Intel-based PC running FreeBSD) filling a small NEMA cabinet, to the current systems, homebuilt or built and shipped by us to people and places where we need good coverage. It will talk about the evolution of the piaware software, all Tcl based and available on GitHub, lessons learned, and current pain points such as problems with tcltls.
I'll talk about ADS-B and its future for air traffic control, briefly explain what a software defined radio is and how it was repurposed for receiving aircraft telemetry data, describe the dump1090 program that receives that data from the radio and interprets it, and the piaware program that establishes the connection to FlightAware's servers, logs in, and filters the data to reduce load on the host's Internet connection.
One novel aspect is how we figure out who should be credited for the data received. This is done by looking for current session activity from the same IP address by a registered user.
There will also be a lot of screenshots and photos. One of the challenges is keeping people interested in maintaining their site and improving their coverage, for instance, migrating from an antenna taped to a window to one on their roof. We do this by generating rankings, graphs, and details of flights that their feeders have recently contributed to. One of the graphs is a polar plot showing which compass directions their positions are coming from. All of this is done using Tcl.
Through a web interface, people can control what information about their site is public, give their site a name, set how long after an outage to wait before they are notified of a problem, choose whether to allow auto-updating of their software, choose whether their site participates in multilateration to determine the location of aircraft equipped with transponders but not ADS-B, and remotely upgrade or restart their device, examine log files, and so on.
An example of the stats page (without user control) can be viewed at https://flightaware.com/adsb/stats/user/dbaker#stats-2162siteIt
After many years of writing Tcl scripts and bash and POSIX shell scripts, one of the things I've found missing in Tcl is a good coprocessing primitive that makes it easy for pipelines of text to pass from one coprocessing block to another. Enter "pipethread", a coprocessing framework for Tcl inspired heavily by POSIX shell's pipe (|) operator. pipethread improves upon the POSIX shell model by adapting it to Tcl's style while retaining all of the flexibility you would normally use in a POSIX shell script.
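To convey the idea, here is a toy pure-Tcl pipeline in the spirit of the shell's | operator. It is not pipethread's actual API, and it ignores the concurrency that makes real coprocessing worthwhile; each stage is just a lambda mapped over a stream of lines:

```tcl
# Hypothetical sketch, NOT pipethread's API: apply a sequence of stages
# to a list of lines; a stage returning "" drops the line (like grep).
proc pipeline {lines args} {
    foreach stage $args {
        set out {}
        foreach line $lines {
            set res [apply $stage $line]
            if {$res ne ""} { lappend out $res }
        }
        set lines $out
    }
    return $lines
}

set lines {apple banana cherry avocado}
puts [pipeline $lines \
    {x {expr {[string match a* $x] ? $x : ""}}} \
    {x {string toupper $x}}]
# analogous to: printf '%s\n' ... | grep '^a' | tr a-z A-Z
# prints: APPLE AVOCADO
```

A real coprocessing framework would run each stage in its own thread or event-driven channel so that stages overlap in time, which is the part shell pipes give you for free.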