DirectoryFeeder, to ensure that incoming jobs are actioned on a FIFO basis.
This worked fine in most cases, until I discovered that the remote application was occasionally returning data containing commas within quotes, e.g.
"Michael", "Fitzmaurice", "Java, Python, XML". This obviously broke my parsing code - logically you and I can see that this string of text contains 3 distinct fields. We know that because we are mentally parsing it based on tokens being the data between quotes. We are also able to easily recognise that the quote ending one token should not be regarded as the quote beginning the next token (hence we ignore the commas separating tokens). However, my code was tokenising based on the comma as token delimiter, and subsequently became confused.
Although this is arguably a questionable thing for the remote application to do (and contrary to their own specification...), the IT department of the company in question had no interest in investigating. The quickest and easiest way around this was therefore to write my own very basic string tokeniser class that would be able to parse the data using rules more similar to the ones used by a human, given text in this format. Writing this class probably took slightly less time than explaining the motivation for doing so on this web page...
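The original class isn't reproduced here, but a minimal sketch of such a quote-aware tokeniser might look like the following (the class and method names are my own illustration, not the author's actual code). The idea is simply to track whether we are inside a pair of double quotes: commas inside quotes are part of the token, while everything outside quotes is ignored.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a quote-aware tokeniser. A token is the text
// between a pair of double quotes; commas outside quotes are treated
// as separators and discarded, commas inside quotes are kept.
public class QuotedStringTokeniser {

    public static List<String> tokenise(String line) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : line.toCharArray()) {
            if (c == '"') {
                if (inQuotes) {
                    // Closing quote: the token is complete
                    tokens.add(current.toString());
                    current.setLength(0);
                }
                inQuotes = !inQuotes;
            } else if (inQuotes) {
                current.append(c); // commas inside quotes are data, not delimiters
            }
            // characters outside quotes (commas, spaces) are ignored
        }
        return tokens;
    }

    public static void main(String[] args) {
        String line = "\"Michael\", \"Fitzmaurice\", \"Java, Python, XML\"";
        // Yields three tokens, with the comma-laden third field intact
        System.out.println(tokenise(line));
    }
}
```

Feeding it the problem string above yields three tokens, with "Java, Python, XML" surviving as a single field. Note this sketch assumes every field is quoted, as in the data described; a general CSV parser would also need to handle unquoted fields and escaped quotes.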
For serious backups (e.g. of production systems), we used HP Data Protector across private Gigabit Ethernet connections, but that was overkill for these little data files from my desktop. I don't know the maximum directory size this app can cope with, but it certainly isn't intended for anything major.
You can also see that I now write code using the more widely used (in the Java community, at least) 'K&R' style, with braces at the end of the line. I switched because I had embarked on the Sun Certified Java Developer project and wanted to comply with the Sun coding conventions - now I write all my Java code this way. Evolution!