TTYtter Advanced Usage: The TTYtter API
This document covers the API available to version 2.1.x. Because
of the changes introduced with Twitter API 1.1, there are
important differences from TTYtter 2.0.x. You should make
sure that your application or extension works properly with the new release
to ensure forward compatibility.
Using the TTYtter API, you can almost totally customize how the
client interacts with you and displays tweets and direct messages,
add your own custom commands to the client,
drive TTYtter as a completely scripted command line tool, or
use TTYtter as a rapid-deployment engine
to construct Twitter bots. TTYtter takes care of posting and
fetching tweets and DMs, and your application can supersede as much or as little
functionality as it needs to. You don't even need to write loops or handle
any actual transactions with the Twitter server unless you really want to.
Here are the command line options relevant to the API (see the
full list of command-line options):
- -exts=[extension to load,extension to load,...]
- Specifies a comma-separated list of filespecs (fully qualified paths are recommended), which are
passed to the multi-module loader (we'll discuss the loader below). Essentially
the extensions are
directly required by TTYtter and become part of it. Your
extension must return a true value at the end (an idiom like 1; as
the last line will suffice nicely). The order of extensions is significant,
as it determines their load order and call order (we'll discuss that
when we get to the API reference).
The actual "methods" and globals you can and must use are
discussed below, but I strongly advise reading this
whole page first.
- -extpref_*=[option] (optional)
- These options become instantiated inside TTYtter by name and
can be used to pass additional arguments relevant to your extension (e.g.,
-extpref_myextension_logfile=foo.log, which then can be seen as
the global
$extpref_myextension_logfile). Good
practice is to give yourself as unique a namespace as is reasonable
with these; although the option name might get long, these are globals, so
you should try to avoid any
collisions. You can put these in .ttytterrc, though they cannot be set
with /set or displayed with /print (but they can be
accessed, like anything else, with /eval).
This supersedes the old
-twarg option, which is deprecated as of 2.0 because it does not
scale in a multi-module environment (see below); extensions using it
should migrate to -extpref_* before 3.0.
Environment variables
can achieve a similar effect, but using -extpref_*
avoids any unusual restrictions your OS might place on your environment
(which are usually more restrictive than those on your command line)
and keeps your environment free of clutter.
- -daemon (optional)
- Forces TTYtter to run as a "detached" process in the
background (the PID of the new background process is reported). With no
extensions loaded, this is as if one were running the regular background
update process, but without a console process (so updates continue to appear,
but you must use a command line incantation like
ttytter -status=... or something similar to
actually post, as you are in your shell and not TTYtter).
You must kill the process manually to shut it down.
If an extension is defined, then this is the basis for constructing a
fully automated bot
(we'll talk more about that shortly), as TTYtter will automatically
fetch tweets and DMs at the specified interval
and hand them to your extension(s).
You might also specify
-silent for independent bots.
- -script (optional)
- Forces TTYtter into a fully deterministic "scripted" mode, where
streaming is disabled,
fetches do not occur automatically, user activity is suppressed,
-silent and -noansi are forced on, and all
activity is entirely determined by the extension(s) loaded. Generally not
particularly useful without an extension specified, -script mode
would be used for driving TTYtter entirely from the command line, or
as a tool to be called from things like cron jobs and CGIs.
In fact, see below for -runcommand, which is the perfect option
for those sorts of tasks.
This mode is so important for TTYtter that extensions can
specifically request whether they want -script on or off. Because
-script requires all fetches to be specifically requested by the
extension, -script and -daemon are incompatible,
and because streaming is inherently nondeterministic,
-script disables streaming.
- -runcommand=[command to run] (optional)
- Runs a single command as if you had typed it at the terminal.
Implies -script.
I have always found it easiest to learn by example, so let's look at some
simple but useful examples before we get to the
API reference; it will make a lot more sense if this
is your first time. Before rolling up your sleeves, however, consider
this next section ...
Before getting started: Simple command line queries, piping to
TTYtter, and -runcommand
Sometimes you don't have to write an extension at all. If you are simply
requesting data that you can get with regular TTYtter commands,
then piping commands to TTYtter may be enough. Since
you are completely controlling what is fetched and done this way, you should
use the -script option. The examples below assume you already
have your API keyfile created in the default location.
For example, suppose you just wanted to find out the last 20 tweets of user
twitterapi and then grep it for something:
echo "/again twitterapi" | ttytter -script | grep -i banana
Or, maybe you want to find Japanese wannabe poets:
echo "/search #haiku" | ttytter -script
Notice that all of these examples use -script.
The -script argument, as you
will recall, disables automatic updates, sets -silent, and disables
ANSI colour and other settings inappropriate for not-a-terminal.
In the above example,
if you're simply interested in fetching specific types of tweets or filtering
out others, a
-filter option and a set of hashtags could be all you require. See
Command-line options.
Obviously, you are not limited to single commands; you could have an entire
command file if you like, and just pass that in with redirection or as an
argument (ttytter -script my_script_file). However,
you can make your requests very compactly if you are
only sending a single
command by using -runcommand. Instead of fetching your timeline
like this,
echo "/a" | ttytter -script
you can make the request without the pipe by using it as a parameter to
-runcommand:
ttytter -runcommand="/a" # this is faster too
which, as you remember from above,
sets -script automatically and runs the single command you want
for you in one step. As another example, consider what's in my
crontab: ttytter -runcommand="/replies"
If the command you want to pass to -runcommand requires a menu code,
you can simply provide it a tweet ID or a DM ID instead (to pass a DM ID,
prepend the ID with d to disambiguate it, since the ID numbers can
overlap).
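For example (the tweet ID here is made up), asking /url to open the links in a
specific tweet straight from your shell might look like this:
ttytter -runcommand="/url 123456789012345678"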
Similarly, posting from the command line can either be done by piping
tweets to TTYtter, or using -status (and/or
-hold to make the post bulletproof, and/or -silent
to make it shut up):
echo "Eating other Twitter clients for lunch. Yowza." | ttytter -script
echo "Eating other Twitter clients for lunch. Yowza." | ttytter -status=-
ttytter -status="Eating other Twitter clients for lunch. Yowza."
Scripting is limited to the commands the console understands,
since you are basically driving the console. Similarly, it is difficult to hook error
responses for any particular command, since the console does not offer
this behaviour; thus scripted applications cannot be considered bulletproof.
Thus, if you need to change TTYtter's behaviour in a way the
console does not support, or need to write custom behaviour for enhanced
reliability or fallbacks, you will need to use an extension -- which brings us
to our first training-wheels, yet useful, example.
A first example: Displaying more tweet metadata
TTYtter throws away a lot of (in this context) irrelevant metadata
by default when it formats tweets, but suppose you're interested in seeing
a little more under the surface. Here is an example of a more florid way of
displaying tweets. Place this into a separate file (example name:
spammytweets.pl) and invoke it with
ttytter -exts=spammytweets.pl (adjusting the path to wherever you stored it):
$handle = sub {
my $ref = shift;
print $stdout ($ref->{'id_str'}, " ", # BUT SEE $streamout BELOW!
&descape($ref->{'user'}->{'name'}), " ",
&descape($ref->{'user'}->{'location'}), " ==> ",
&descape($ref->{'text'}), "\n");
return 1;
};
This introduces the general way of hooking into the
TTYtter API: rather than having
regular Perl subroutines, instead you assign anonymous subroutine
references to specific global scalars (listed below).
This particular subroutine reference, $handle, is called for every
tweet that is to be displayed and is handed a hash reference containing the
individual fields of the tweet. If you don't define a handler for a
particular TTYtter API subroutine reference, then the
default is automatically used for you so that you only need to define
the custom behaviour you want.
The subroutine then pulls out the fields it
wants to display, which are simply keys to the hashref (see the
Twitter API Documentation for
what fields are available in JSON).
Notice that the text fields are passed to a special
internal function &descape, which is a convenience function
that TTYtter uses internally (and exposes to its extensions)
for implementing UTF-8 support and converting character entities and other
metadata. It then builds everything into a string and prints it to the
$stdout filehandle. More about that in a minute.
The handler returns 1 to tell TTYtter that one logical tweet
was handled. If it declines to handle the tweet for some reason, it should
return zero.
Note that the extension just ends. Any other setup that needs to be done should
be done by the extension before it exits. Because an anonymous subroutine
reference is considered a "true" value, this small extension does not need the
1; idiom at the end.
With this extension running, suddenly TTYtter's background updates
take a new form:
92780182 Jenny How Malaysia ==> nice cold morning.. feels so lazy.... :)
92780192 Clifford Dog Desierto de Sonora Mexico ==> @ingridipity: huy que mal, espero que sea Cerveza Ligera ;)
92780202 Kelly Sims Torrance, Ca ==> also, the showcase on ee site is cool to see how others are using it
SUPER IMPORTANT LITTLE UNEXPECTED SNAG:
$streamout versus $stdout: when do you use them?
What happens if you decide to use this with -script?
You're about to get a rather rude surprise:
ttytter -exts=spammytweets.pl -runcommand="/a" # NO OUTPUT!
Nothing prints out!
The reason this doesn't work is where the extension sends its data.
TTYtter defines two filehandles: $stdout, which is used
for user interaction and prompting (and bears that name for historical
reasons), and $streamout, which is used for actual data. When
-script is in effect, all data sent to
$stdout is suppressed so
that you only get data, not all the UI and UX stuff.
So, what do you do? Change the extension to use $streamout; this
fixes the problem and works both in the interactive client and
from the command line with -script. Use $streamout for
the actual results of a transaction, and $stdout for everything
else, and you should be good. This is the convention we will use for
the rest of these examples.
An "enhanced" first example: A Twitter filter
Suppose you are only interested in one particular subject, let's say, bananas.
It should be entirely obvious that you can filter on any term just by using
a regular expression search on the relevant JSON field. Yes, you could just
grep the output or use the -filter option,
but this is another way:
$handle = sub {
my $ref = shift;
my $text = &descape($ref->{'text'});
return 0 if ($text !~ /banana/i);
print $streamout ($ref->{'id_str'}, " ",
&descape($ref->{'user'}->{'name'}), " ",
&descape($ref->{'user'}->{'location'}), " ==> ",
$text, "\n");
return 1;
};
Note that tweets that do not match are not printed, and zero is returned
to alert TTYtter that the tweet was declined. Only if the tweet
is actually accepted (in this case, for printing) is 1 returned.
The idea of "accepting" a tweet is more closely examined in the third
example.
A second example: Adding a mapping command to TTYtter
Another popular thing to do with TTYtter is add custom commands.
So here is one that is even slightly useful, defining a
/gmap command that takes a tweet's menu
code, looks it up, and (if there is Geolocation API information) opens a
browser with that location in Google Maps. Save it to a file like
googlemap.txt and add it on with -exts=googlemap.txt
(or, if you want our previous example running at the same time, combine
the extensions together with -exts=spammytweets.pl,googlemap.txt).
$addaction = sub {
my $command = shift;
if ($command =~ s#^/gmap ## && length($command)) {
my $tweet = &get_tweet($command);
if (!$tweet->{'id_str'}) {
print $stdout "-- sorry, no such tweet (yet?).\n";
return 1;
}
if ($tweet->{'user'}->{'geo_enabled'} ne 'true' ||
($tweet->{'geo'}->{'coordinates'}->[0] eq 'undef')) {
print $stdout
"-- sorry, no geoinformation in that tweet.\n";
return 1;
}
&openurl("http://maps.google.com/maps?q=" .
$tweet->{'geo'}->{'coordinates'}->[0] . "," .
$tweet->{'geo'}->{'coordinates'}->[1]);
return 1;
}
return 0;
};
Let's look at a few of the new things we've introduced here:
- The $addaction reference is called by the console loop after
history substitution, etc. have occurred and gives your extension a chance
to intercept the command.
If your extension wants to do something with it,
it does so, and returns 1 to tell the console it was handled. The console
then assumes it doesn't need to do anything else with the command. Returning
1 is perfectly valid from an error, by the way; the return value only means
that your routine claimed the command, not necessarily that the command
succeeded. You can see that above when the user queries a tweet with no
GeoAPI information, for example.
If your
extension doesn't want to handle this command, it returns 0. If other
extensions in the chain subsequent to yours have an $addaction
subroutine reference, then TTYtter will present the command to
them, one by one in the order specified in -exts=...,
until one of them accepts the command or the command falls through
to the default console handler. (NB: this is the general way the
multi-module dispatch handles round-robin calls. More about that in the
API reference.)
- All our error and information
messages go to $stdout because they are part
of the UI, not the actual result of the command. In fact, this command
generates no output for $streamout at all.
- The tweet code is queried by another library function,
&get_tweet. This library function takes a regular menu code
and returns either a real tweet hashref structure or a faked-up one by
querying the background process. Only some fields are guaranteed to appear in
the tweet structure; the included fields are listed in the API reference. There
is also a &get_dm. If there is no tweet ID in the returned
structure, then it must be bogus, and an error message is displayed.
- TTYtter takes care of normalizing Geo information for you, so
we know that if the geolocation coordinates are "undef" then they
must be bogus. (Notice this is a string.) Also, if the user
doesn't want to be geo_enabled, then we assume they don't want
us querying their location (by seeing if geo_enabled is
"true" or not). Note that as a side effect of how
Perl is used to evaluate the JSON data, rather than using logical
true and false, the JSON reference instead uses the literal text values
true and false. Don't trust this for fields that aren't
guaranteed to be Boolean.
- If these checks pass, then another library function &openurl
is used to open a URL to Google Maps in the user's browser. (This is the
same function that /url calls.)
Obviously, you can make a whole bunch of little single-command extensions
and mash them together with -exts=... to expand your TTYtter
command vocabulary any way you like.
A third example: A Twitter logger
Naturally, you are not restricted merely to output. For those inclined
towards blackmail, you can create a "logger" that not only displays the
tweets it receives, but neatly organizes them into files:
$store->{'master'} = "$ENV{'HOME'}/twt.bookmark";
if(open(S, $store->{'master'})) {
$last_id = scalar(<S>);
print $stdout "LIB: init last id: $last_id\n";
close(S);
}
$extension_mode = $EM_SCRIPT_OFF;
$handle = sub {
my $ref = shift;
return 0 if ($ref->{'user'}->{'protected'} eq 'true');
my $sn = &descape($ref->{'user'}->{'screen_name'});
my $string = &descape($ref->{'created_at'}) .
" \@$sn " .
&descape($ref->{'user'}->{'name'}) .
" says: " .
&descape($ref->{'text'}) . "\n";
$sn =~ s#/#-#g;
open(S, ">>", "${sn}.twt") || # combine strings for 5.005
die("can't open ${sn}.twt for append: $!\n");
binmode(S, ":utf8") unless ($seven);
print S $string;
close(S);
&defaulthandle($ref);
return 1;
};
$conclude = sub {
print $stdout "LIB: writing out: $last_id\n";
if(open(S, ">".$store->{'master'})) {
print S $last_id;
close(S);
} else {
print $stdout "LIB: failure to write: $!\n";
}
&defaultconclude;
};
This extension not only displays the tweets you receive, but it also organizes
them into individual [screenname].twt files with date stamps and
full names. It will not log users who are protected. Make sure you create
$HOME/twt.bookmark before you start this application -- a simple
touch will be sufficient.
This more elaborate example also illustrates some other standard things:
- All globals used by your extension should be stored as keys within the
$store hash reference.
You can be guaranteed that the $store
reference will always represent the current state of whatever your extension
put in it.
Otherwise, except for globals that are explicitly declared as user-manipulable,
don't clobber or mess with other globals.
- Any initialization routine can just go at the top, without having to be
in an organized subroutine.
- Notice that, like in our second example,
all our informational messages, both in the initialization
section and in the $conclude subroutine reference (we'll get to
it), are being sent to $stdout. If the user wants to shut up the
extension so that there's no verbose debugging information, they can just say
-silent, and everything on $stdout is suppressed (but
data sent to $streamout still goes through).
- This extension manipulates an important TTYtter global called
$last_id, which is the ID of the last/highest tweet received;
TTYtter maintains it for you automatically. The reason this
routine cares about it is that initially, when $last_id is zero,
TTYtter will fetch the last twenty tweets, and this could make the logs stutter
with repeated tweets if the client has to stop and start. For this reason,
$last_id is saved in a file ~/twt.bookmark and automatically
set to the correct value of the last tweet logged when this extension
starts.
(I'll talk about how $last_id gets automatically recorded in a
second.)
- The special global $extension_mode tells the extension loader
what this extension wants the state of the -script flag to be.
Normally, this global is set to $EM_DONT_CARE by default,
which, as the name
suggests, means the extension doesn't care what mode it's running in and
will work in either environment.
However, this extension is useless without automatic fetches, which
-script specifically prevents. Because this extension requires
TTYtter making such fetches on its behalf to keep its logs up to
date, it tells TTYtter that it must not run if the
-script flag is set, i.e., $EM_SCRIPT_OFF. If
TTYtter detects that the user added -script, then this
line will tell it to return a fatal error. (Conversely, $EM_SCRIPT_ON
demands that the user specify -script, and throws a fatal error if
they do not. These globals are defined by TTYtter for you. We will
see an example of an extension that demands $EM_SCRIPT_ON in a
minute.)
You should do your best to make sure your extensions work in either
environment, because there is no guarantee that other extensions the user
may be loading with yours will have the same requirements for -script.
Only use $extension_mode if there is no other way for your
extension to function correctly.
- The $handle subroutine reference
looks at the 'protected' value
to see if this user should be recorded.
Please note that checking 'protected'
is considered mandatory behaviour for Twitter API-compliant applications where
private data may be handled or stored. If the user is protected, the
tweet is dropped and zero is returned to indicate the tweet was declined.
- The $handle subroutine reference
looks at the global $seven to
determine if UTF-8 should be turned on. This is good social behaviour.
- The $handle subroutine reference calls a subroutine
&defaulthandle to display the tweet. Every "overloadable"
function has a corresponding default* subroutine (the
default method) to get the
default behaviour. You are under no obligation to call it, but it is
always available; however, you should only call default methods at
the end of your subroutine (for reasons to be discussed below).
The $conclude subroutine reference also calls
&defaultconclude for the same reason, to make sure all its
regular code is run.
- Now to how $last_id gets automatically saved:
the $conclude subroutine reference. This reference is called
at the end of a "pulse" of tweets. Each time the tweets are done printing,
this $conclude reference writes $last_id out to ~/twt.bookmark.
Make the above into an automated logging bot "instantly"
It should be obvious that this could be made into a background bot simply
by turning off the output you don't need to have displayed
(i.e., either with the -silent option to suppress
$stdout, or completely
deleting unneeded output lines like the LIB:
output and &defaulthandle), starting
ttytter with -exts=... and -daemon, and then
letting the bot just slurp tweets out into files in the background. Presto,
instant logging bot!
Remember that even though no logical tweets are
being displayed, they are still being accepted, so $handle should
still return 1.
Make the above into a hashtag logger "instantly"
So, let's say you built the automated logging bot above,
but you're having a conference or an interview between lots of people
on Twitter (like #bigshindig). If you're following all those people,
you could filter for the hashtag, and return 0 for tweets that don't contain
it.
But say you're not following everyone in the conversation, or there
are too many people to follow. In that
case, turn off your timeline and then put the hashtag into $track.
Then, you'll only see that hashtag. To do this, insert these lines
at the very beginning:
$track = '#bigshindig';
$notimeline = 1;
A fourth example: A Twitter parrot
First, a word of warning: please don't actually run this, you will
irritate a lot of people! This is a very silly example, but it will
give you a basis for how to create interactive applications. It is
intentionally broken so that it can't be used as is, but it still serves an
educational purpose.
This extension creates a Twitter parrot, which is to say any tweet it can
see, it will tweet again. To avoid an endless loop, it determines the user
it is running as and won't parrot back something it itself has said.
die("I can't run anonymously") if ($anonymous);
$store->{'dontecho'} = $whoami; # this is the username
$handle = sub {
my $ref = shift;
my $sn = &descape($ref->{'user'}->{'screen_name'});
return if ($sn eq $store->{'dontecho'});
my $string = "\@$sn " . &descape($ref->{'text'});
die("broken");
&updatest($string);
&defaulthandle($ref);
return 1;
};
Most of this we have seen before, except for the global $whoami,
which represents the current user screen name, and the
subroutine &updatest, which is used to send a new status for
the current user. It should be obvious to the reader that making a more
interactive system is just a matter of parsing the text of the tweet, and
then tweeting out a smarter or at least less aggravating response.
Fourth example redux: Making the parrot use direct messages instead
Or, you can make it much, much less aggravating by having it only
talk to people who actually directly message it -- hence, surprise surprise,
direct messaging support. Direct messaging operation
is handled almost exactly the same way as regular tweets
except that it uses $dmhandle instead. These changes to the
above example should suffice:
[...]
$dmhandle = sub {
[...]
my $sn = &descape($ref->{'sender'}->{'screen_name'});
[...]
my $string = "D $sn " . &descape($ref->{'text'});
[...]
};
Here, we query the screen name from the sender field, reply using the
Twitter D command in a standard post (you can also call
&updatest with special arguments to make a direct message
without this syntax; discussed below), and return 1 to tell
TTYtter that the direct message was accepted and acted upon.
Note that only a minimal change in logic is required to make this happen
on direct messages instead of public tweets and vice versa.
The "anti-loop" logic isn't really needed here, but is nice to account
for and won't harm anything either.
To make such DM bots effective and the communication bidirectional,
essentially you must be following everyone
who follows you. This can be done with the Twitter API in a programmatic,
delayed fashion, but if you are doing this on a large scale you should
speak with the Twitter developers.
An exercise for the reader: as written, just like in our logger example,
every time the bot starts it will go through its most recent 20 DMs all
over again, even if it had already processed them previously.
Change this example to use a bookmark as well (hint:
$dmconclude and $last_dm).
API reference
Let's now get into the technical details.
The multi-module architecture
TTYtter is designed around a cooperative multi-module architecture,
which allows it to load, manipulate and maintain multiple extensions at once,
limited only by memory. It was designed to be similar enough to the
long-obsolete
single extension system so that many extensions would require no or minimal
adjustment to function in the new multi-module world, and to use the same
general concepts of redefining subroutine references to override desired
portions of the TTYtter API, while still allowing the expansion of
TTYtter's capabilities in any way the user wants.
When TTYtter starts up, it completes its own initialization and
then enters the multi-module loader. The loader examines each extension
in the -exts=... option, in the order they are specified. The
extension is then required into TTYtter, allowing it to
execute
its initialization code and define its API subroutine references, and then each
subroutine reference it defines (of the known API
subroutine references) is examined and
assigned to a dispatch table to be called by the multi-module dispatch.
Error checking is also done at this point to prevent multiple extensions from
hooking
API subroutines that cannot be shared (more on that in the next subsection) and
to enforce the extension's requirement for the -script option.
During TTYtter's execution, when it reaches an API hook point,
it enters the multi-module dispatch. The dispatch goes through each
of the defined subroutine references for the API hook in the order defined
(therefore, in the order specified in -exts=...) and calls up to
(though not necessarily) all of them, and finally the default method if any of
the subroutine references called it during their execution. The default
method is only ever actually called once, even if all the extensions
requested it.
What you can do, what you can't do, what you should do and what you
must not do in multi-module land
Generally, most extensions don't even have to worry about the internal
implementation. As long as they stick to their own namespace and behave in
a "rational" manner, the details are generally irrelevant. However, there
are some benefits and restrictions that go with the territory.
- Under multi-module, an extension can:
- ... mask another extension. Certain API subroutine references
are maskable, allowing an extension to dynamically
mask a particular method
and, at will and on the fly, prevent subsequent extensions that hook it
from executing. This is most useful with references such as $handle,
where it makes things like filters possible, or $addaction,
allowing you to override any TTYtter command.
Not every API hook can
be masked, nor is it relevant for all of them. Maskable API references are
listed in the table below.
- Under multi-module, an extension can't:
- ... call a default method anywhere but at the end of a
subroutine reference. Default methods are now always terminal.
Remember that the multi-module dispatch only
executes the default method at the very end of evaluation, and then only if
any of the hooked extensions had requested it. If you call a default method
in the middle of your subroutine reference, at best the call will not happen
when you expect it, and if your extension depends on the return value of the
default method, your extension may not work right at all because it will
never get one. Furthermore, if you call a default method that the
dispatch is not expecting, it may never execute, so you should only
ever call the default method for the current API subroutine reference.
While default methods were not necessarily terminal in
older versions of TTYtter, they are now. The only things you
should be doing after a default method call are either terminating the
subroutine and/or returning a value.
- ... hook an exclusive reference another extension has already
hooked. Certain API subroutine references are exclusive,
which is to
say only one extension can override them (the first one to ask, essentially);
otherwise, the client behaviour would either be unsafe or undefined.
If multiple extensions try to
hook that method, the multi-module loader will return an error and refuse
to start. These
methods are made exclusive for either security ($getpassword) or
technical ($autocompletion, etc.) reasons. They are listed
in the table below.
- Under multi-module, an extension should:
- ... keep its variables, namespace
and state to itself. This can be
accomplished in a variety of ways; the easiest is simply to use my
for everything and keep the scope local. If this is not possible, then store
all globals as keys within the $store hash reference, as we did
in the examples above. If you want to define and use library subroutines,
assign them as subroutine references to $store, and call
them that way.
Other than those officially supported methods, do your best to keep the
namespace clean. You may be able to get away with tricks using
package, but this is not supported; in the same vein,
you should also avoid
use to set custom pragmas or load other modules into the Perl
namespace unless you know what you're doing.
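As a minimal sketch of the "library subroutine in $store" idea (the helper
name and its behaviour here are made up):
# keep a private helper out of the global namespace
$store->{'clip'} = sub {
    my $text = shift;
    return (length($text) > 60) ? substr($text, 0, 57) . '...' : $text;
};
# later, inside one of your hooked references:
# my $short = &{$store->{'clip'}}(&descape($ref->{'text'}));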
- ... get and set user settings through the established
&getvariable and &setvariable getter/setter.
While
many TTYtter globals can be manipulated and set directly, this
behaviour is deprecated as many user options trigger internal actions
(such as precompilation steps),
and some others need to be propagated to both
the foreground and background process. These functions are described in the
library routines below. The keys for these functions are the same as the
named command-line options.
- Under multi-module, an extension must not:
- ... change the API subroutine references it defines after it defines
them. Obviously you can change any of your own subroutines you make and
define and call yourself, but once you have defined anything for the
API, you must not change it. The multi-module dispatch is optimized such that
it assumes the API references it calls
are immutable after definition, so such behaviour is currently
undefined, and even if it works now is in no way guaranteed to work in the
future. If you really need mutable behaviour, hand TTYtter an immutable
wrapper as the subroutine reference; that wrapper can then call whatever
(mutable) code it wants internally.
Superclassable subroutine references
These subroutine references can be used to replace or augment TTYtter
behaviour. Falling
back on the default behaviour is optional, but is always available using
the &default* subroutine (e.g., the default $handle
routine can be called with &defaulthandle). The default
routine expects to be called with the same arguments the "super-routine"
was called with. Only the default routine for this particular method
should be called within subroutine references, e.g., only call
&defaulthandle within $handle. Also, default routines
are terminal: if you desire to call the default method,
it should be the last thing your reference calls before
returning a value or terminating.
- $addaction (argument 1: command line) (maskable)
- Called after initial commandline processing by the default console to allow
the implementation of custom commands or to override internal commands (except
/quit, etc., which probably shouldn't be overridden anyway). If the
routine returns 0, then TTYtter assumes that the routine does not want
or recognize the command line it was provided, and continues with processing.
If the routine returns 1, then TTYtter assumes that the routine
accepted (or at least wants to suppress further processing of) the command
line for its own internal processing, and no further processing is done.
Default behaviour is to return 0. This routine is one-way, i.e., if you rewrite
the command line within $addaction, TTYtter will discard it
and resume with its own copy. If you want to actually alter the command line
itself and have TTYtter process that (e.g., macro or alias
substitution), look at $precommand.
This API reference is maskable -- the first extension to return 1
masks all subsequent extensions from receiving the command, including
the default handler.
- $autocompletion (argument 1: text to be completed,
argument 2: state of current command line, argument 3: position within
line of text to be completed) (exclusive)
- Called, if readline mode is enabled, by the operating
Term::ReadLine::* driver whenever TAB completion is needed. An
array of fully-qualified likely choices is expected as a return value. For
an example of how such a routine would operate,
look at &defaultautocompletion. You should be
familiar with Term::ReadLine to make the most of this hook.
This API reference is exclusive for technical reasons.
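As a rough sketch only (this replaces the default completion entirely, and the
command names offered are hypothetical), a completer might look something like:
$autocompletion = sub {
    my ($word, $line, $pos) = @_;
    # offer our custom commands as completions for anything that prefixes them
    return grep { index($_, $word) == 0 } ('/gmap', '/mycommand');
};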
- $conclude (no arguments)
- Called at the end of each cycle of tweet processing. Default behaviour is
to display the count from
-filter, if any tweets were discarded and a count was requested.
Return value, if any, is discarded.
- $dmconclude (no arguments)
- Called at the end of each cycle of direct message processing. Default
behaviour is to do nothing additional. Return value, if any, is discarded.
- $dmhandle (argument 1: hash reference) (maskable)
- Called when a direct message is to be displayed or otherwise
handled in some manner. The
keys of the hash reference are based on those specified by the Twitter API.
Note that as a side effect of Perl's interpretation of the JSON, logical
true and false in Boolean fields are rendered as literal text
"true" and "false". The routine, naturally, is not obligated
to generate any output. Default behaviour is to display the DM
formatted to standard options (using &standarddm [below]) with
the sender's
name, time stamp provided by Twitter, and the text of the direct message.
For success, the number of "logical DMs" handled
(almost always one) should be returned.
If the DM was declined for processing, a zero value should be returned.
If you like, you can pass your hash reference to &standarddm for
the "default" formatting, which will return a string formatted according to
whatever standard options are set (such as -timestamp, -wrap,
-ansi, etc.).
This API reference is maskable -- if an extension returns zero, then
all subsequent extensions will not be passed the direct message. Only
if the extension returns 1 will it be passed to subsequent extension
references. However,
the default method will always
be called if this extension or any prior to it had
called it during this run of the dispatch.
- $eventhandle (argument 1: hash reference) (maskable)
- Only operational in streaming mode. Called when a streaming event
is to be displayed or otherwise handled in some manner. The keys of the
hash reference are based on those specified by the Twitter Streaming API.
Note that as a side effect of Perl's interpretation of the JSON, logical
true and false in Boolean fields are rendered as literal text
"true" and "false". The routine, naturally, is not obligated
to generate any output.
Default behaviour is to display the
event formatted to standard options (using &standardevent
[below]) depending on what the event is. Delete events usually are highly
impoverished and are simply displayed informationally; others have a "verb"
field which can be used to write a more complete description of the event.
For success, the number of "logical events" handled
(almost always one) should be returned.
If the event was declined for processing, a zero value should be returned.
If you like, you can pass your hash reference to &standardevent for
the "default" formatting, which will return a string formatted according to
whatever standard options are set (such as -timestamp, -ansi,
-wrap, etc.).
This API reference is maskable -- if an extension returns zero, then
all subsequent extensions will not be passed the event. Only
if the extension returns 1 will it be passed to subsequent extension
references. However,
the default method will always
be called if this extension or any prior to it had
called it during this run of the dispatch.
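As a rough sketch (assuming the normalized "verb" field described above, and the
usual default* naming convention for the fallback), an event logger might look like:
$eventhandle = sub {
    my $ref = shift;
    my $verb = defined($ref->{'verb'}) ? &descape($ref->{'verb'}) : 'delete';
    print $stdout "-- stream event: $verb\n";
    &defaulteventhandle($ref); # assumed default method name, per the default* convention; terminal
    return 1;
};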
- $exception (argument 1: exception number, argument 2: exception
text)
- Called when a non-fatal exception is received during processing of tweets.
Argument 2 is guaranteed to be human readable text corresponding to argument 1,
which comes more or less from this table:
- 1: timeout or no data
- 2: Twitter error or status message received (including Fail Whale)
- 3: legacy rate limit error (see below)
- 4: unexpected HTTP return code not caught by other exceptions
- 5: automatic fetch stopped due to rate limit
- 6: more tweets received than menu codes available; best effort made
- 10: JSON cut (unexpected end)
- 11: JSON null list (missing or corrupt array reference)
- 13: ID overflow
- 99: JSON could not be parsed (in -verbose mode, the offending
data and the syntax tree will be dumped) (new in 2.1)
(Numbering gaps are for historical reasons.) The numeric exception
code is provided along with the text to facilitate localization or custom
notification. Exceptions passed to $exception
are designed to be informative only, as TTYtter can
recover from these errors and automatically try again.
Fatal errors are raised immediately and the extension does not receive
notification for technical reasons. Default behaviour is to print the error
text to standard output.
Return value, if any, is discarded.
Twitter is now using a generalized error reporting method to indicate
server-based exceptions, including hitting the API rate limit. All of these
errors are swallowed up under code 2; to distinguish them, check the
error text. Although the old rate-limit trigger message is still parsed for,
and can still theoretically generate the legacy
code 3, this message has been replaced
by the new reporting convention in practice.
By default, $exception is not maskable and all extensions
that are loaded get notified. However, if you
want to suppress error reporting for some reason, you can make
$exception maskable with -exception_is_maskable (or
exception_is_maskable=1 in your .ttytterrc), and return
1 from your extension to prevent the error condition from further
propagation. This is not recommended unless you know what you're doing,
because you can suppress error reporting to the client and not be aware of
server status. "Don't Blame TTYtter."(tm)
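For example, a minimal sketch that simply timestamps exceptions (remember these
are informational only; TTYtter will retry on its own):
$exception = sub {
    my ($code, $text) = @_;
    chomp($text);
    print $stdout scalar(localtime), " -- exception $code: $text\n";
};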
- $getpassword (no arguments)
- Called when a password is required. Your routine should look at
$whoami, and use that to figure out how the password should be
fetched. By default, this asks the user, but you could consider more
clever means such as the Mac OS X keychain. For example, here's a
way of getting the username and password from the command line, provided
by Uwe Dauernheim; conversion to an actual extension is an exercise for
the reader:
USER=`security find-internet-password -s "twitter.com" | grep "\"acct\"" | sed "s/.*\"acct\"<blob>=\"\(.*\)\".*/\1/"`
PASS=`security 2>&1 > /dev/null find-internet-password -gs "twitter.com" | sed "s/password: \"\(.*\)\"/\1/"`
If a password is not needed, this routine is not called. Therefore, this
is only relevant if -authtype is not oauth, because
TTYtter never asks for your password in that case
(just the OAuth-provided single-use PIN).
This API reference is exclusive for security reasons.
- $handle (argument 1: hash reference, [optional] argument 2:
origination) (maskable)
- Called when a tweet is to be displayed or otherwise
handled in some manner. The
keys of the hash reference are based on those specified by the Twitter API.
Note that as a side effect of Perl's interpretation of the JSON, logical
true and false in Boolean fields are rendered as literal text
"true" and "false". The routine, naturally, is not obligated
to generate any output. Default behaviour is to display the
tweet formatted to standard options (using &standardtweet
[below]) with screen name and tweet text, and with the tweet's
menu code prepended. For success, the number of "logical tweets" handled
(almost always one) should be returned.
If the tweet was declined for processing, a zero value should be returned.
If you like, you can pass your hash reference to &standardtweet for
the "default" formatting, which will return a string formatted according to
the return value of $tweettype and
whatever standard options are set (such as -timestamp, -ansi,
-wrap, etc.).
An optional origination argument may also be passed, giving the routine
information about the user command the tweet originated from. This allows
your routine to distinguish old and new tweets reliably.
Origination classes defined currently are a null string, meaning a
new tweet; replies,
meaning tweets from the /replies command; or
again, meaning old tweets (usually from the /again command).
The distinction is important; replies does not appear on new replies,
but only on replies that you ask for. This is on purpose to ensure that API
activity results are consistent and match up. Note that again
overrides replies, and since /again can sometimes pull up new
tweets, the originator is blanked on purpose for those new ones so they are
properly seen as new. The complexity here is mostly intended for those clients
who want to distinguish old and new tweets, or tweets that a user requested
versus tweets that were automatically fetched, and handle them differently.
If you actually want to change how TTYtter classifies tweets
internally, regardless of their age, see $tweettype.
This API reference is maskable -- if an extension returns zero, then
all subsequent extensions will not be passed the tweet. Only
if the extension returns 1 will it be passed to subsequent extension
references. However,
the default method will always
be called if this extension or any prior to it had
called it during this run of the dispatch.
- $heartbeat (no arguments)
- Called at the beginning of each automatic refresh cycle (in either
daemon or interactive mode). Default behaviour is to do nothing additional.
Return value, if any, is discarded.
- $main (no arguments) (exclusive)
- This is TTYtter's main loop. Default behaviour is to operate
the console, i.e., initialize the
history, print an initial prompt, and then accept data line by line from
standard input until a terminating command is received or the input stream
ends. Return value, if any, is discarded, and TTYtter will terminate
completely when the routine is exited.
If you redefine this subroutine reference, then your extension has complete
control of the application.
Nothing says your main loop actually has to take user input, by the
way -- it can do its own thing and ignore standard input completely,
and even drive TTYtter itself with internal commands using
&ucommand.
This hook can also be used to run code after initialization of the client
but prior to accepting user input. If so, your code should end with something
like goto &defaultmain; to transparently return control
to the default handler.
This API reference is exclusive for technical reasons.
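For instance, a minimal sketch of the "run code after initialization" pattern
just described:
$main = sub {
    print $stdout "-- extension loaded, handing control back to the console\n";
    goto &defaultmain; # transparently resume the default console loop
};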
- $precommand (argument 1: command prior to processing)
- Called as soon as a command is received for processing, even before
history substitution. Allows you to implement your own preprocessing.
The new command should be returned as a single response, and is subject
to things like % substitution and so on. Default behaviour is to
just return the same command without further pre-substitution. Although you
could also attempt to intercept and handle custom commands here too,
$addaction is a better choice for that purpose as it is maskable.
This API reference is telescoped: the output of prior extensions is fed to
any subsequent ones. The order of evaluation is, as always, determined by
their order in -exts=....
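For instance, a minimal alias-expansion sketch (the /tl alias is made up):
$precommand = sub {
    my $command = shift;
    $command =~ s#^/tl\b#/again#; # expand our private alias before normal processing
    return $command;
};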
-
$prepost (argument 1: tweet prior to posting)
$postpost (argument 1: tweet after posting)
- These two are paired, so they are listed together. $prepost
is called when a tweet is about to be URL-encoded and sent, allowing you
to implement your own tweet preprocessor (such as, say, a translation or
shortening
service). The new tweet should be returned as a single response. After the
tweet is posted, $postpost is called with the final tweet (which
barring an act of God or cosmic radiation should be the same as what
$prepost returned), which is useful for tools such as loggers.
Default behaviour for the former is to simply
return the same tweet without further pre-substitution, and for the latter,
to do nothing.
$prepost is telescoped: the output of prior extensions is fed to
any subsequent ones. The order of evaluation is, as always, determined by
their order in -exts=....
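As a small sketch of the pairing (the hashtag and the logging are arbitrary examples):
$prepost = sub {
    my $tweet = shift;
    return "$tweet #ttytter"; # tag everything we post (watch your length!)
};
$postpost = sub {
    my $tweet = shift;
    print $stdout "-- posted: $tweet\n"; # a real logger would write this somewhere durable
};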
- $prompt ([optional] argument 1: information only) (exclusive)
- Called every time a prompt is to be displayed by the console.
Default
behaviour is to display TTYtter>, followed by a separating
space. If ANSI colour is enabled, the prompt is displayed in
cyan.
The prompt is only printed by the default
handler if -readline
is not enabled (otherwise the prompt is maintained
by ReadLine). If optional argument 1 is true, then the prompt and
its screen width (which may not be the same as its length) are returned as
a list for interested subroutines. When the
prompt is printed or requested, the wordwrap subroutine is hinted to use
it in calculations by setting global $wrapseq to zero.
This API reference is exclusive for technical reasons.
- $shutdown (no arguments)
- Called when a normal shutdown is occurring, viz., a shutdown
initiated by the user or by TTYtter as part of expected operation.
Unexpected shutdowns such as &screech intentionally do
not call this routine, so your extension should be prepared to
handle that possibility. Default behaviour is to do nothing additional.
Return value, if any, is discarded.
- $tweettype (argument 1: hash reference, argument 2: screen
name, argument 3: tweet text) (maskable)
- Called to determine what class a tweet should be (which should be
returned as a string); DMs are handled separately (there is no
corresponding $dmtype). The four standard tweet classes are
me, reply, search and default; you could define other classes for
use, say, with the notification framework (discussed below) or with your
own custom $handle routine. To fall
back on the standard tweet class selection, simply
return &defaulttweettype($ref, $sn, $tweet) for ones you
don't want to classify yourself.
This API reference is maskable, but in a slightly unusual way --
any extension reference that does not call the default method (and
returns an explicit tweet type) is considered to mask all subsequent
references, as it is assumed routines calling the default method are
signaling they do not know how or want to classify this tweet.
- $userhandle (argument 1: user object reference) (maskable)
- Called to display a user object. This is the routine that displays
the two-line user text from /followers and like-minded commands.
This API reference is maskable -- the first extension to return 1
is considered to have handled the display of the object. An extension can
simply passively observe by merely returning 0, indicating it declines to
show it (or wishes to pass it on). If no extensions return 1, meaning none
of them elected to terminally
display the user object, then the default routine is called.
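As a sketch, a terse one-line display (the user object fields come straight from
the Twitter API; the layout is arbitrary):
$userhandle = sub {
    my $ref = shift;
    print $streamout (&descape($ref->{'screen_name'}), " (",
        &descape($ref->{'name'}), ") ",
        &descape($ref->{'description'}), "\n");
    return 1; # we displayed it; mask the default two-line display
};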
Writing custom notification drivers and using custom tweet classes
Your API extension can define a custom notification driver, which is handled
differently from the above. In general, the argument
to -notifytype=... is turned into a function name using the
notifier_ prefix, e.g.,
-notifytype=growl indicates that the subroutine
notifier_growl should be called for notifications. Thus, if you
wanted to define, say, a notifier_email subroutine for E-mail
notifications, you would invoke it with -notifytype=email after
including it with -exts=....
The notification subroutine is called in two ways: with no arguments (or
more accurately, with a single argument of undef) during
initialization, and thereafter with three arguments: the class (as
determined by $tweettype), the tweet string
as processed by &standardtweet as a convenience, and a
hash reference to the tweet in case you want to format it yourself.
Your routine will then handle the notification, and return. Return value,
if any, is discarded.
The default drivers included (&notifier_growl and
&notifier_libnotify) are instructive examples. Both are
formatted in the same basic way: during initialization they seek out
their required utilities (growlnotify and notify-send
respectively) and store them in an appropriate global, and then, as
tweets are passed to the notifier, they pass them off to their
dependent utility with the correct command line arguments on standard input.
Neither of the built-in drivers
does anything special with the class currently. However, your driver may
decide to in fact do special things with each class, and your customized
$tweettype method, if you choose to write one, can tag tweets
with new classes that your custom notification routine can handle (say,
a class peter for tweets from your friend Peter; for those
tweets, $tweettype would return 'peter', and you would add
peter to your list of classes in -notifies=...).
This is entirely supported, and you can of course simply fall through to
&defaulttweettype for other tweets that are not from Peter to
get default behaviour otherwise.
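A minimal sketch of that idea (the screen name is obviously just an example):
$tweettype = sub {
    my ($ref, $sn, $tweet) = @_;
    return 'peter' if (lc($sn) eq 'peter'); # tag tweets from our friend
    return &defaulttweettype($ref, $sn, $tweet); # everything else gets the standard classes
};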
Note that notification routines
are called only for new tweets; tweets identified as old do not
get passed to the notifier to avoid ping-ponging.
Library routines
These routines are explicitly designated as available for calling from a
user application. Other routines may also be utilized, but are not guaranteed
to maintain compatible naming or calling convention in future versions. They
are not overridable.
- &descape (argument 1: data, [optional] argument 2: entity
mode)
- General text decoding routine. This single entry point is responsible for
HTML ampersand-entity decoding, converting
escaped UTF-8 characters into their correct/desired form
and returning the de-escaped data.
You should call this routine if you reasonably
expect the string to contain such data, or you are preparing it for user
display.
If the data contains UTF-8 entities and UTF-8 is disabled
with -seven, then the entities are rendered as dots ('.').
The processed data is returned.
If optional
argument 2 is true, then UTF-8 entities are rendered in HTML "ampersand" form,
even if UTF-8 is "off." This is particularly useful for web applications.
&descape will also convert many
ampersand-escaped entities into ASCII, unless argument 2 is true
(in which case it assumes that you wish them to remain ampersand-escaped,
like the UTF-8 entities will be).
- &getbackgroundkey (argument 1: key)
- Asks the background process for the specified key in $store.
This is useful for using TTYtter's built in IPC to get the state
of the background process, or to get data from an extension running in
the background process. Only a string is returned; pre-serialize your data
yourself. The IPC protocol truncates keys to 15 characters. See
&sendbackgroundkey for more information on this feature.
You can only make this call from the foreground.
- &get_dm (argument 1: menu code)
&get_tweet (argument 1: menu code)
- Both library functions
return a hash reference corresponding
to the DM or tweet
specified by the menu code given, or undef if not found. If the
actual tweet is in foreground memory, that actual hashref will be returned;
otherwise, the console will ask the background and make a "fake" smaller
hash reference with only essential fields. You should only rely on the
essential fields listed here; you are not guaranteed to get the entire
tweet structure. If the returned reference has an undefined or zero ID field,
you should also assume the requested reference is invalid and/or does not
exist, and not use it further.
If you are querying a direct message,
only the
ID (as $key->{'id_str'}),
sender (i.e. $key->{'sender'}->{'screen_name'}),
creation time (created_at) and
text are guaranteed part of the reference.
If you are querying a tweet,
only the ID, source, in_reply_to_status_id,
geolocation data, sender
(i.e., $key->{'user'}->{'screen_name'}),
creation time (created_at) and
text are guaranteed part of the reference.
- &getvariable (argument 1: variable name [as string])
- The explicitly designated getter for user settings. Call this function
with a variable name as a string (as specified in the list of
command-line options),
and the variable will be returned
to you. Illegal requests will return undef. While you can access
most settings variables directly, this is now deprecated, as there
are virtual settings you can only get from this getter routine and there
will be more in future versions.
This is the same routine the /print command calls.
See also &setvariable.
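For instance, a quick sketch (the -wrap option is used here just as an example key):
my $wrapcol = &getvariable('wrap');
print $stdout "-- current wrap setting: $wrapcol\n";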
- &grabjson (argument 1: URL, [optional] argument 2:
last ID, [optional] argument 3: don't authenticate,
[optional] argument 4: number of items to request)
- Asks URL for a JSON data source, calls &parsejson on it
to turn it into a variable structure (q.v.),
and returns either a reference to a hash, reference to an array, or a scalar,
or an undefined reference if there was a problem. As with
&parsejson, severe or unexpected
parsing errors will simply return undef (this is a change from
2.0.x; be prepared to handle this situation).
Note that the URL need not be a Twitter URL; you can call this on any
JSON source and get a ref back, if TTYtter can parse it. If
optional argument 2 is true, then a since_id= is added to the request
for you. If argument 3 is true, then authentication information
is not sent with the request (this is a legacy option for compatibility,
and should never be used with Twitter API 1.1 which requires all access
to be authenticated).
If argument 4 is specified, then a count= is added to the request
for you (you are not guaranteed to get that number, but you are guaranteed
to not receive more than that number).
Figuring out how to interpret the reference is your problem;
however, &grabjson will do some normalization on the reference
if it appears to be coming from the Search API to make it look like a regular
Twitter API response, and does other adjustments to the fields for
consistency if they are also coming from Twitter or a Twitter-like API.
Note that as a side effect of Perl's interpretation of the JSON, logical
true and false in Boolean fields are rendered as literal text
"true" and "false".
If you just want the contents of that URL without any parsing, then use ...
- &graburl (argument 1: URL, [optional] argument 2:
POST data)
- Uses TTYtter's user agent to execute an HTTP GET to
fetch the desired URL and returns the contents as a string scalar without
further interpretation. If optional argument 2 is specified, then it is used
as POST data and the request is sent as a POST instead.
You should make sure you have done all required encoding prior to calling
this routine.
- &openurl (argument 1: URL)
- Opens the URL in the user's browser according to the
current -urlopen settings.
- &parsejson (argument 1: JSON text)
- Interprets JSON into a Perl variable structure, returning either a
reference to a hash, reference to an array, or a scalar, or an undefined
reference if there was a problem.
Note that as a side effect of Perl's interpretation of the JSON, logical
true and false in Boolean fields are rendered as literal text
"true" and "false".
This is the same routine &grabjson calls (and
&postjson, for that matter).
In general, however, it is more efficient and less trouble to just use
&grabjson to fetch from a URL and parse it in one step
than it is to grab the URL with
&graburl and feed it to &parsejson.
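A small sketch of the Boolean quirk just mentioned, feeding &parsejson a
literal JSON string:

  # Sketch: parse JSON text directly; the Boolean arrives as the text "true".
  my $parsed = &parsejson('{"ok":true,"count":3}');
  if (defined($parsed) && $parsed->{'ok'} eq 'true') {
      print $stdout "count is ", $parsed->{'count'}, "\n";
  }
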
- &postjson (argument 1: URL, argument 2: POST data)
- Makes a POST request to the specified URL. If the data is null, it may
be converted into a GET depending on your user agent, so pass something if
you want to be safe. Always sends authentication information if available.
Just like &grabjson, this routine has all the parsing quirks
of &parsejson, which it also calls. Twitter supports a means
of converting this to a DELETE with the pseudo-argument
_method=DELETE. Not all Twitter-alike APIs may support this,
let alone non-Twitter APIs.
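As a hedged sketch (the endpoint and the ID are illustrative only; do any
required URL-encoding before the call):

  # Sketch: favourite a tweet by ID and check that a parsed response came back.
  my $result = &postjson(
      'https://api.twitter.com/1.1/favorites/create.json',
      'id=123456789012345678');
  print $stdout "favourite request failed\n"
      if (!defined($result) || !$result->{'id_str'});
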
- &screech (argument 1: error text)
- Emits the error text to standard output (ringing the bell if supported),
then kills the client and shuts down immediately. Used as an escape hatch for
unsafe situations or fatal errors. Note that this bypasses $conclude
and $shutdown, and
as such, no further notification is given to any extension that a fatal
condition has occurred. This routine does not return.
- &sendbackgroundkey (argument 1: key, argument 2: string)
- Sends the string to the background process using
TTYtter's built-in IPC facility, which stores it in the calling
extension's $store hashref with argument 1 as the key. Only strings
are supported; serialize your funky structured data your own self. If you
pass a null second argument, then the key is set to undef. Keys
are truncated to 15 characters due to TTYtter's IPC
protocol requirements. The foreground can use the analogous
&getbackgroundkey to fetch as well (q.v.).
This is
how messages can be passed back and forth between your extension's foreground
instance and its background instance. Say you have a command that needs to get
data from the background. Your extension in the background has something
hooked into $heartbeat that looks at a predefined place, like
$store->{'command'}, for a command; it does an operation based
on that, and saves the result into $store->{'result'}. In the
foreground, your extension would use &sendbackgroundkey to
send a command string to key command, and pick up the result from
key result with &getbackgroundkey, either asynchronously
or by blocking and busy-waiting until a result is received.
Keys are local to the extension store, so you don't have to worry about other
extensions stomping on your namespace. You can only call this function (and
&getbackgroundkey) in the foreground.
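A compact sketch of that round trip follows. The key names and the "work"
performed are arbitrary, and the &getbackgroundkey call shape (key name as its
single argument) is assumed from its companion entry:

  # Background half: hook $heartbeat to look for a pending command in the store.
  $heartbeat = sub {
      if (defined($store->{'command'}) && $store->{'command'} eq 'uptime') {
          $store->{'result'} = time() - $^T;   # stand-in for real work
          $store->{'command'} = undef;
      }
  };

  # Foreground half: call this from a custom command, never at load time.
  sub ask_background_uptime {
      &sendbackgroundkey('result', '');        # clear any stale answer (sets undef)
      &sendbackgroundkey('command', 'uptime');
      for (my $try = 0; $try < 30; $try++) {
          my $answer = &getbackgroundkey('result');
          return $answer if (defined($answer));
          sleep 1;                             # busy-wait politely
      }
      return undef;                            # the background never answered
  }
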
- &sendnotifies (argument 1: tweet reference,
[optional] argument 2: origination)
&senddmnotifies (argument 1: DM reference)
- These functions can be used by an extension to independently
raise notifications without having to call
&defaulthandle and/or
&defaultdmhandle. These are most useful where you have
printed a DM or tweet, but don't want the default handler to print another
one. They should be passed the same arguments as
&defaulthandle and
&defaultdmhandle.
- &setvariable (argument 1: variable name [as string],
argument 2: variable value [anything],
[optional] argument 3: interactive)
- The explicitly designated setter for user settings. Call this function
with a variable name as a string (as specified in the list of
command-line options) and the value to store into
it. This setter will then trigger whatever side effects need to occur and
synchronize whatever state needs to be synchronized with the background
process. Errors are logged to $stdout; success returns 0 and
failure returns 1. If optional argument 3 is true, then success is also
logged to $stdout.
&setvariable can set read-only variables -- but
only during the multi-module loader phase (i.e., only during
the initialization step of your extension). Once TTYtter
has started up and handed control to the main loop,
then read-only variable settings are locked and may not be changed without
a restart.
Please be nice and don't try to write to them directly anyway;
this is not supported,
not socially friendly, and almost certainly won't work in future versions.
This is the same routine the /set command calls.
See also &getvariable.
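A short sketch pairing it with the getter above; 'verify' is again just an
example key, and the third argument makes success chatty:

  # Sketch: turn -verify on via the designated setter (0 = success, 1 = failure).
  my $failed = &setvariable('verify', 1, 1);
  print $stdout "could not enable -verify\n" if ($failed);
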
- &standardtweet, &standarddm,
&standardevent (argument 1:
hash reference)
- These routines
return the default pre-formatted string for a tweet, DM or event, respectively,
as indicated by the hash reference passed to them, honouring any
user-specified formatting arguments. See $handle,
$dmhandle and $eventhandle respectively.
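For example, a hedged sketch of a $handle hook that logs each new tweet in the
stock format before handing it to the default display. It assumes the hook
receives the tweet hashref as its first argument, and
$extpref_myextension_logfile is a made-up -extpref_* option:

  # Sketch: log tweets using the default formatting, then display as usual.
  $handle = sub {
      my $ref = shift;
      if ($extpref_myextension_logfile &&
              open(LOG, '>>', $extpref_myextension_logfile)) {
          my $line = &standardtweet($ref);
          chomp($line);
          print LOG "$line\n";
          close(LOG);
      }
      return &defaulthandle($ref);   # let the normal display still happen
  };
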
- &ucommand (argument 1: console command)
- Executes a command as if you had typed it at the console, even if you
have hooked $main or are not using the console. This is mostly
useful for running user commands with consequences (but while you could use it
for the /set command as well, for example, &setvariable
is a bit more transparent for that specific purpose). You cannot use
&ucommand during your extension's initialization, however,
because there is not enough state information instantiated yet to run
commands;
doing so will cause an error to be raised by the multi-module loader.
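For instance (a trivial sketch; /print is used because it is documented above,
but any console command will do):

  # Sketch: run a console command as if the user had typed it.
  &ucommand('/print verify');
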
- &updatest (argument 1: status text, [optional] argument 2:
interactive mode, [optional] argument 3: in reply to status ID,
[optional] argument 4: in reply to user,
[optional] argument 5: retweet ID)
- Updates the current user's status with the given status text. If optional
argument 2 is true, any error condition will also be displayed on
standard output. If optional argument 3 is specified and non-zero,
then the posted tweet
will have its in-reply-to value set to the specified ID.
If optional argument 4 is specified, then the tweet is converted to a
direct message to the specified user.
If optional argument 5 is specified, then the tweet is converted to a
retweet of that tweet ID (assuming -nonewrts is false); the
status text should still be set so that prompts in interactive mode function.
A status value
is returned: zero if the post was successful, or a return code
dependent on Lynx or curl if not (see their respective documentation).
Return code 97 indicates the user decided not to post the tweet (refused
by -verify or -slowpost).
Return code 99 indicates that the subprocess could not even start, possibly
due to a change in the filesystem or permissions.
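A brief sketch using the documented return codes (call it from a hook or
custom command, not at load time; the status text is arbitrary):

  # Sketch: post a status and report the outcome.
  sub post_greeting {
      my $rc = &updatest('hello from my extension', 1);
      if    ($rc == 0)  { print $stdout "posted.\n"; }
      elsif ($rc == 97) { print $stdout "user declined to post.\n"; }
      else              { print $stdout "post failed (code $rc).\n"; }
      return $rc;
  }
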
Exposed globals
These globals are explicitly designated as available or permissible for
user operation and manipulation. Globals specific to the extension itself
should always be stored as keys within the hash reference
$store; manipulating other globals
could interfere with normal TTYtter operation, and those globals are not
guaranteed to retain the same function or name in future versions.
For globals that are instantiated by user command-line options and/or
/set, you should use &getvariable and
&setvariable
as their getter/setter respectively. While you can try to access
them directly, they are not guaranteed to be current, and setting some
options has side effects that simply modifying the globals directly will
not replicate. The variable key(s) for these functions are the same as the
named command-line option and/or runtime variable; see above for a more
detailed description.
- $TTYtter_VERSION
-
The current major/minor version of TTYtter (currently a string
containing a float value, though it was previously a bare float
[so version 0.7 was represented as 0.7, but 0.9 is now
"0.9"]). This doesn't include the
patch level; see the next variable for that.
- $TTYtter_PATCH_VERSION
-
The current patch level of this release of TTYtter, loaded into
a separate scalar for backwards compatibility; although
this was not defined prior to 0.5.1, the patch level of previous versions
can be safely assumed to be patchlevel 0, and as such a nice idiom is to use
(0+$TTYtter_PATCH_VERSION) which is guaranteed to give zero for
old versions and will not bug out on current ones. This is an integer,
so $TTYtter_VERSION 0.5 and $TTYtter_PATCH_VERSION 1
indicates version 0.5.1.
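For example, a version gate at the top of an extension, using the idiom above
(the required version is arbitrary; failing the check aborts the load):

  # Sketch: refuse to load on anything older than 2.1.1.
  if ((0+$TTYtter_VERSION) < 2.1 ||
          ((0+$TTYtter_VERSION) == 2.1 && (0+$TTYtter_PATCH_VERSION) < 1)) {
      die "myextension: TTYtter 2.1.1 or later is required\n";
  }
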
- $extpref_*
- Any command line option added by the user or placed in .ttytterrc
starting with extpref_ is turned into a global (so
-extpref_bletch_foobar=5000 sets $extpref_bletch_foobar
to 5000). By definition
these are external to TTYtter, which does nothing further with them;
their behaviour is fully defined by your extension. Because these are
global, you should reasonably make sure that the name you choose for your
option will not trample on the namespaces of other extensions the user may
have loaded.
- $whoami
- The current screen name in use.
It is not instantiated until authentication is complete and
the client main loop has started, so you should not assume it will be
populated during your extension's initialization. You should treat
this global as read-only:
changing $whoami does not necessarily change the credentials sent
by TTYtter. If you are authenticating with OAuth, this is only
set if -status is not set, because TTYtter doesn't
need your screen name simply to post with an OAuth keyfile.
- $parent $child
- The PIDs of the parent and child processes. In interactive mode, both are
defined; in daemon mode, only the latter (the parent ID becomes zero).
Neither should be modified, or processes may not get properly
terminated.
- $last_id
- The last/highest tweet ID so far processed. It starts at zero, but may
be advanced to skip tweets as $handle (or &defaulthandle
where not specified) is only called for new tweets, viz., tweets with an ID
higher than $last_id. Even if $handle returns zero for an
arbitrary tweet, that tweet's ID is still considered for $last_id.
- $last_dm
- Analogously, the last/highest direct message ID so far processed. Its
behaviour is exactly the same as $last_id, including starting from
zero on startup, and even if $dmhandle returns zero for an arbitrary
DM, that DM's ID is still considered for $last_dm.
- $lasttwit
- The last successful tweet (empty if no tweets have been made). This is
not carried from session to session.
- $streamout $stdout
- The two file handle references handling, respectively, the results of
commands, and informational messages and prompts, as copiously discussed above.
- $CCme $CCreply $CCdm $CCsearch $CCprompt $CCwarn $CCdefault
- The surface manifestation of the -colour* command line options,
viz., the actual printable terminal sequences. You should use these rather
than hard-coding an ANSI sequence when displaying a particular tweet class.
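For example, a one-line sketch that emits a warning in whatever colour the
user configured (the message text is arbitrary):

  # Sketch: print a warning using the exposed colour sequences.
  print $stdout $CCwarn, "myextension: rate limit nearly exhausted", $CCdefault, "\n";
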
Send comments and six-packs of Mr Pibb to
ckaiser@floodgap.com,
or return to the main TTYtter page.
Cameron Kaiser