Data Transfer Glossary

This section of the site defines and explains terms, abbreviations, protocols and much more to create a Data Transfer glossary. It is perfect for Windows administrators, sysadmins, IT architects and project managers interested in learning more about data transfer and file sharing.

Terms are grouped by letter below:

A

AD

AD (pronounced “ay, dee”) is an abbreviation for Microsoft Active Directory, a very common external authentication system used in the file transfer industry to centralise authentication, user account information and access control. See “Active Directory” for more information. 

 

Ad-Hoc File Transfer

The term “ad hoc” (or “person to person”) file transfer describes one person sending a file to another person, generally on a ‘one-off’ basis.

To elaborate, let’s set the scene. It’s 5.30 pm on a Friday afternoon. You have been working on a really important proposal for a new client that’s due for submission and you just HAVE to send it now. It could be that the file is too large for your email server to process as an attachment or that it’s super sensitive and you need visibility of receipt by your client rather than the send and forget approach of email. What do you do?

It’s in scenarios such as this that ad hoc file transfer comes into its own. It offers businesses a quick and simple means of sending large or sensitive files minus the hassle of creating and managing end user accounts on FTP servers or stressing about whether it got there via email.

An ad hoc file transfer solution will quite simply allow you to create a single job, enter the email address of your recipient and press send. That’s it. No further details concerning your recipient are required and all they need in order to receive the file is a standard email account.

Ad hoc file transfer is typically implemented by businesses that need greater visibility of files leaving and entering the organisation. More often than not these days it’s implemented to replace consumer-grade solutions like Dropbox or other cloud-based systems.

Do you have a requirement to send large or sensitive files on an ad hoc basis? If so and you’d like to find out more about the ad hoc file transfer solutions supplied by Pro2col, please don’t hesitate to contact us on 0333 123 1240.

 

AES

AES (“Advanced Encryption Standard”) is an open encryption standard that offers fast encryption at 128-bit, 192-bit and 256-bit strengths.

AES is a symmetric encryption algorithm often used today to secure data in motion in both SSH and SSL/TLS.  (After an asymmetric key exchange is used to perform the handshake in an SSH or SSL/TLS session, data is actually transmitted using a symmetric algorithm such as AES.)

AES is also often used to secure data at rest in SMIME, PGP, AS2, strong Zip encryption and many vendor-specific implementations.  (After an asymmetric key exchange is used to unlock a key on data at rest, data is actually read or written using a symmetric algorithm such as AES.)

Rijndael is what AES was called before 2001.  In that year, NIST selected Rijndael as the new AES algorithm and Rijndael became known as AES.  NIST validates specific implementations of AES under FIPS 140-2, and several hundred unique implementations have now been validated under that program.

BEST PRACTICE: All modern file transfer clients and file transfer servers should support FIPS-validated AES, FIPS-validated 3DES or both.  (AES is faster, may have more longevity and offers longer key lengths; 3DES offers better backwards compatibility.)
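
To make the symmetric encryption idea concrete, here is a minimal Python sketch using the third-party “cryptography” package. It is an illustration only; production deployments should lean on the FIPS-validated modules described above, and key management is out of scope:

    # Minimal AES-256-GCM sketch using the third-party "cryptography" package
    # (pip install cryptography).  Key management, nonce storage and error
    # handling are deliberately out of scope.
    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # a 256-bit AES key
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                      # must be unique per message

    ciphertext = aesgcm.encrypt(nonce, b"contents of payroll batch", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == b"contents of payroll batch"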

 

Air-Gapped Networks

An air-gapped network, often referred to as an "air gap," is a highly secure and physically isolated computer network that is completely separated from unsecured or potentially compromised networks, such as the internet or other external networks. The term "air gap" signifies that there is a literal gap or physical barrier between the secure network and the outside world, preventing any direct electronic communication between them. This isolation is designed to provide the highest level of security for sensitive or critical systems and data.

Air-gapped networks are commonly found in high-security environments, such as government agencies, military facilities, critical national infrastructure, and financial institutions that deal with sensitive data and communications. Maintaining and managing air-gapped networks can be complex and costly due to the need for physical security measures and the limitations on data transfer. As a result, businesses carefully assess the trade-offs between security and operational efficiency when deciding to implement and maintain such networks.

 

AndFTP

AndFTP is a free, full-featured, interactive FTP client for Android smartphones and devices.   It was created by Lysesoft, a company specialising in Android phone file transfer client development.

AndFTP offers support for FTP, FTPS, SFTP and can remember a large number of connection profiles.  AndFTP does not yet (as of version 2.4) support integrity checks using MD5/SHA1 or file compression on the fly (i.e., “MODE Z”), but it does already support multiple languages, EPSV and IPv6.

AndFTP’s official site is http://www.lysesoft.com/products/andftp/

 

ANSI X.9

ANSI X.9 (or “ANSI/X.9”) is a group of standards commonly used with bulk data transmissions in item processing and Fed transfers.

An example of an ANSI X.9 standard is “ANSI X9.100-182-2011” which covers how XML can be used to deliver bulk data and images.

Published ANSI standards may include some technical artifacts such as XML XSD documents, but typically rely on specific maps set up in specific transformation engines to completely integrate with backend systems.



AS1

AS1 (“Applicability Statement 1”) is an SMIME-based transfer protocol that uses plain old email protocols (such as SMTP and POP3) to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

End-to-end encryption is accomplished through the use of asymmetric encryption keyed with the public and private parts of properly exchanged X.509 certificates.  Guaranteed delivery is accomplished through the use of strong authentication and signing, also through the use of the public and private parts of properly exchanged X.509 certificates.

AS1 is an unpopular alternative to the AS2 protocol, at least for new implementations.  Many vendors successfully sell software that supports AS2 but not AS1 or AS3.  However, AS1’s design as an email-based protocol allows many companies to implement it without investing in extra file transfer technology at their perimeters; they simply need to implement AS1 internally and make sure it can access email.

See also “AS2” for the HTTP-based variant and “AS3” for the FTP/S-based variant.

 

AS2

AS2 (“Applicability Statement 2”) is an SMIME-based transfer protocol that uses HTTP/S to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

The two main reasons that AS2-based transmission systems are unpopular, unless specifically requested by particular partners, are complexity and cost.

In terms of complexity, AS2 configurations can involve up to four different X.509 certificates on each side of a transfer, plus hostnames, usernames, passwords, URLs, MDN delivery options, timeouts and other variables.  Configuration and testing of each new partner can be a full-day or multi-day affair, where simpler protocols such as FTP may require hours or minutes.  To hide as much of the configuration complexity as possible from administrators, some AS2 products (such as Cleo’s LexiCom) come with dozens or hundreds of preconfigured partner profiles, but knowledge of the underlying options is still often necessary to troubleshoot and deal with periodic updates of partner credentials or workflows.

In terms of cost, AS2 products that can connect to multiple trading partners are rarely available for less than ten thousand dollars, and the ones that ship with a well-developed list of partner profiles cost much more than that.  One factor that drives up this cost is that any marketable AS2 product will be “Drummond Certified”.  The cost of high-end AS2 products is also driven up by the fact that compiling and keeping up an extensive library of partner profiles is an expensive endeavor in its own right.  Implementing AS2 securely across a multiple-zone network also tends to drive up costs because intermediate AS2 gateways are often required to prevent direct Internet- or partner-based access to key internal systems.

Another factor working against voluntary AS2-based implementations is transfer speed.  The use of HTTP-based encoding and the requirement that MDNs are only compared after the complete file has been delivered often tips the operational balance in favor of other technology.

AS3 was developed, in part, to cope with AS2’s slow HTTP-based encoding, but other modifications (“optional profiles“) to the AS2 protocol have also been introduced to address other limitations.  For example, the optional “AS2 Restart” feature was introduced industry-wide to cope with large files whose delivery was heretofore dependent on long-lasting, unbroken HTTP streams.

Nonetheless, AS2 is considered to be the most successful and most widely adopted of any vendor-independent file transfer protocol that builds both transmission security and guaranteed delivery into the core protocol.

The default port for AS2 is port 80 (HTTP) or port 443 (HTTPS).
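
For a feel for what an AS2 transmission looks like on the wire, the hypothetical Python sketch below shows the shape of the HTTP POST an AS2 sender makes.  The header names follow RFC 4130, but the endpoint, identifiers and payload are made up; a real sender would use an AS2 library to build the signed/encrypted SMIME body:

    # Hypothetical AS2 POST skeleton.  Header names follow RFC 4130, but the
    # endpoint, identifiers and body are made up; a real sender would build an
    # SMIME signed/encrypted payload with an AS2 library rather than raw bytes.
    import uuid

    import requests  # third-party: pip install requests

    headers = {
        "AS2-Version": "1.0",
        "AS2-From": "MYCOMPANY",                    # hypothetical AS2 identifiers
        "AS2-To": "PARTNER",
        "Message-ID": f"<{uuid.uuid4()}@example.com>",
        "Content-Type": "application/pkcs7-mime",   # SMIME envelope in real use
        "Disposition-Notification-To": "as2@example.com",  # request an MDN
    }

    response = requests.post("https://partner.example.com/as2",
                             headers=headers,
                             data=b"...SMIME payload would go here...")
    print(response.status_code)  # the signed MDN (receipt) returns sync or async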

See also “AS1” for the email-based variant, “AS3” for the FTP-based variant and “AS2 optional profiles” for additional information about available AS2 features.

 
 

AS2 Optional Profiles

AS2 optional profiles (also “optional AS2 profiles”) are features built into the AS2 protocol but not used by every Drummond certified vendor.  However, the Drummond Group does validate optional profiles (nine in total), seven of which are briefly covered below.

Certificate Exchange Messaging (CEM) – A standard way of exchanging certificates and information about how to use them.

Multiple Attachments (MA) – Simply the ability to transmit multiple files in a single AS2 transmission.

FileName preservation (FN) – Adds metadata to AS2 transmissions to preserve original filenames.  “FN-MA” covers AS2 transmissions without MDNs and “FN-MDN” covers transmissions with MDNs.

Reliability – Provides an application standard around retry, IDs and related matters to prevent double posts.

AS2 Restart – Allows larger files, including those over 500MB, to be sent over AS2.

Chunked Transfer Encoding (CTE) – Permits transmission of data sets that are still being generated when transmission starts.

BEST PRACTICES: The most useful AS2 optional profiles for file transfer are usually MA (multiple attachments) and FN (filename preservation).  Your AS2 software should support both of these.  If you transmit files larger than a few megabytes with AS2, then AS2 Restart is also a must.  Other options may be useful on a case-by-case basis.

 

AS3

AS3 (“Applicability Statement 3”) is an SMIME-based transfer protocol that uses FTP/S to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

AS3 is an unpopular alternative to the AS2 protocol.  Many vendors successfully sell software that supports AS2 but not AS1 or AS3.  However, AS3’s design as an FTP-based protocol allows many companies to implement it with minimal file transfer technology investments at their perimeters; they simply need to implement AS3 internally and make sure it can access a plain old FTP/S server exposed to the Internet.

See also “AS1” for the email-based variant and “AS2” for the HTTP/S-based variant.

 
 

Automated File Transfer

Automated file transfer is a term used to describe the programmatic movement of files. Automated business processes typically exist for system-to-system transfers, either inside an organisation or between trading partners, and are usually file based. Businesses usually look to adopt fully featured solutions to replace legacy scripts or unreliable manual processes, reducing support overheads and error rates and increasing efficiency.

An automated file transfer solution should provide a simple, powerful interface enabling IT to create business workflows without the need for programming skills. Features should include the ability to push and pull files using a variety of file transfer protocols, a wide range of options for scheduling events, and the means to transform or manipulate file contents. Robust reporting and notification are key to ensuring those responsible are informed in a timely manner.
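
For contrast, the kind of legacy script such solutions replace often looks like the minimal sketch below (Python with the third-party paramiko library; the hostname, credentials and paths are hypothetical):

    # Minimal scripted SFTP push of the kind an automated file transfer
    # solution replaces with managed workflows.  Hostname, user, key path and
    # file paths are all hypothetical.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()   # only connect to already-known hosts

    client.connect("sftp.example.com", username="batchuser",
                   key_filename="/home/batchuser/.ssh/id_rsa")

    sftp = client.open_sftp()
    sftp.put("/outgoing/invoices_20240101.csv",
             "/inbound/invoices_20240101.csv")   # local path, then remote path
    sftp.close()
    client.close()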

Do you have a requirement to automate current business processes? If so and you’d like to find out more about the automated file transfer solutions supplied by Pro2col, please don’t hesitate to contact us on 0333 123 1240.

B

B2B

B2B (“business to business”) is a market definition (a “space”) that covers technology and services that allow file and other transmissions to be performed between businesses (i.e., not between consumers).  B2B covers a lot of conceptual ground, from simple “file transfer” and “secure file transfer” to more sophisticated “managed file transfer” and up through traditional EDI.  In addition to overlapping with the venerable EDI space, many analysts now see that the B2B space overlaps with the EAI (“enterprise application integration”) space, the “Community Management” space (e.g., effortless provisioning) and even the hot new cloud services space.

If you are an IT administrator or someone else charged with getting a file transfer system to work, the presence of the “B2B” term in your project description should tell you to expect to use the FTP/S and SSH (SFTP) protocols, although you may also see AS2, proprietary protocols or some use of HTTP/S (especially if web services are available).

If you are a buyer of file transfer technology and services, the presence of the “B2B” term in your project requirements will limit the field of vendors who can win the business, attract the attention of traditional IT analyst firms (e.g., IDC, Forrester and Gartner) hoping to steer the business to their clients and may kick decision making further up your corporate hierarchy.

See the Pro2col website for more on B2B file transfer.

 

BIC

A Bank Identifier Code (BIC) is an 8 or 11 character ISO code used in SWIFT transactions to identify a particular financial institution.   (BICs are also called “SWIFT addresses” or “SWIFT codes”.)  The format of the BIC is determined by ISO 9362, which now provides for unique identification codes for both financial and non-financial organisations.

As per ISO 9362:2009, the first 8 characters are used for organisation code (first 4 characters), organisation country (next 2 characters), and “location” code (next 2 characters).  In 11-character BICs, the last three characters are optional and denote a particular branch.  (Leaving off the last three characters or denoting them with “XXX” indicates the primary office location.)  If the 8th character in the BIC sequence is a “0”, then the BIC is a test BIC.
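
Because the layout is fixed, splitting a BIC into its parts is straightforward.  The short Python sketch below mirrors the field positions described above (the sample BIC is purely a format illustration):

    # Split a BIC into its ISO 9362 components.  "DEUTDEFF500" is used purely
    # as a format illustration.
    def parse_bic(bic):
        bic = bic.strip().upper()
        if len(bic) not in (8, 11):
            raise ValueError("BIC must be 8 or 11 characters")
        return {
            "institution": bic[0:4],       # organisation code
            "country": bic[4:6],           # organisation country
            "location": bic[6:8],          # location code
            "branch": bic[8:11] or "XXX",  # optional; XXX means primary office
            "is_test": bic[7] == "0",      # "0" in the 8th character = test BIC
        }

    print(parse_bic("DEUTDEFF500"))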

C

Certificate Spill

Accidental “certificate spill” is a common problem in file transfer security. It occurs when an untrained or careless individual accidentally sends the private key associated with a public/private certificate pair to someone who only needs the public component.

Certificate spill is a dangerous problem because it exposes credentials that allow unauthorised individuals to act with the identity and permission of trusted individuals and systems.

Today, it is common for a file transfer server administrator to ask an end user for “his or her certificate” so the administrator can add the end user’s public certificate credential to the user’s file transfer profile. This will allow future connections to a file transfer server to be negotiated with strong authentication in place. However, end users frequently send both their private and public keys to a request like this. This kind of certificate spill often occurs in cleartext email and is often accompanied with an infuriating note like “I don’t know exactly what you’re looking for, but here’s all the files in my certificate folder”.

A worse case occurs when an untrained administrator broadcasts both the public and private components of a server key or server certificate to every actual or potential end user “to help them connect”. (This happens more frequently than you might believe.)

To prevent certificate spills, proper training and proper deployment of technology that make it easy for end users and administrators to perform certificate exchange are both critical.

BEST PRACTICE: If you use keys or certificates to authenticate, secure and/or vouch for information, you must ensure that all personnel who handle these credentials know the difference between public and private keys and know when to use each type of credential (leaning on features available in software and systems whenever possible). Administrators of these systems should also have a short “certificate spill containment” procedure in place in case a private key is accidentally transmitted. This procedure should include assessment, communication, remediation (e.g., generate/distribute a new certificate and cancel/revoke the old one) and verification.
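
One simple technical safety net is to scan any outgoing PEM file for private-key material before it is sent.  The hypothetical Python sketch below illustrates the idea; it supplements, rather than replaces, the training and containment procedures above:

    # Refuse to send a PEM file that appears to contain private-key material.
    # A text check like this is a safety net, not a substitute for training.
    def looks_like_private_key(path):
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            content = f.read()
        return "PRIVATE KEY-----" in content   # matches RSA/EC/PKCS#8 PEM blocks

    if looks_like_private_key("mycert.pem"):   # hypothetical outgoing file
        raise SystemExit("Refusing to send: file contains a private key")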

 

Certification (Software and Systems)

Certification of software and systems against a standard is better than having software and systems merely in “compliance” with a standard.  Certification means that a third-party agency such as NIST or the PCI Council has reviewed and tested the claim of fidelity to a standard and found it to be true.  Certifying agencies will usually either publish a public list of all certified implementations or will be happy to confirm any stated claim.

A common example of certification in the file transfer industry is “AS2 certification”.  Under this program, the Drummond Group tests various vendors’ AS2 implementations, issues a validation certificate for each that passes and lists all implementations that have passed on a public web page on its own site.

Certification is roughly equivalent to “validation“.

 

Certification (Training)

Individuals working in the file transfer industry frequently have earned one or more certifications through training and testing. These certifications generally fall into one of three categories:

File Transfer Security Certification: (ISC)2 and SANS certified individuals have a good understanding of security from a vendor-neutral point of view. (CISSP is an (ISC)2 certification; CCSK is a newer vendor-neutral certification from the Cloud Security Alliance.)

File Transfer Workflow Certification: Surprisingly little exists today for certified training in the area of workflow design, implementation or operations. PMP certified project managers and Six Sigma trained personnel may fall in this category today.

File Transfer Vendor Certification: Cisco Firewall Specialists and Sterling Integrator Mapping Specialists are examples of vendor-specific certification programs for individuals.

 

Certified File Transfer Professional (CFTP)

CFTP is a recognised certification for file transfer professionals, approved by the CPD. To find out more about the course, please visit the CFTP website.

 

Check 21

“Check 21” is the common name for the United States’ Check Clearing for the 21st Century Act, a federal law enacted in 2003 that enabled banks to phase out paper check handling by allowing electronic check images (especially TIFF-formatted files) to serve all the same legal roles as original paper checks.

Check 21’s effect on the file transfer industry has been to greatly increase the size of files transmitted through banks, as check images are frequently large and non-compressible.

 

Clear Text Password

A “clear text password” is a common problem in file transfer security.   It is a dangerous problem because it exposes credentials that allow unauthorised individuals to act with the identity and permission of trusted individuals and systems.

The problem happens in at least five different areas:

Clear text password during input: This problem occurs when end users type passwords and those passwords remain visible on the screen after being typed.  This exposes passwords to “shoulder surfing” and others who may share a desktop or device.  Most applications today show asterisks while a password is being typed.  Modern implementations (such as the standard iPhone password interface) show the last character typed for a few seconds, then replace it with an asterisk.  (A short sketch of safe password handling appears after this list.)

Clear text password during management: This problem occurs when an operator pulls up a connection profile and can read the password off the profile when he/she really only should be using an existing profile.  To avoid this problem, application developers need to code a permissions structure into the management interface that permits use without exposing passwords.  Application developers also need to be careful that passwords are not accidentally exposed in the interface, even under a “masked” display.  (Perform a Google search on “behind asterisks” for more information on this.)

Clear text password during storage: This problem happens when configuration files, customer profiles or FTP scripts are written to disk and no encryption is used to protect the stored data.  Application developers can protect configuration files and customer profiles, but when FTP scripts are used, alternate authentication such as client keys are often used instead of passwords.

Clear text password in trace logs: This problem occurs when passwords are written into trace logs.  To avoid this problem application developers often need to special-case code that would normally dump clear text passwords to write descriptive labels like “*****” or “7-character password, starting with X” instead.

Clear text password on the wire: This problem occurs when passwords are sent across a network.  To avoid this problem secure transport protocols such as SSL/TLS or SSH are often used.  The most frequent cause of this problem is not application defects but operator error: when an administrator accidentally configures a client to connect without using a secure protocol, credentials are often sent in the clear.
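
As a small illustration of the input and trace-log concerns above, most scripting languages already provide non-echoing password prompts; a minimal Python sketch:

    # Prompt for a password without echoing it, and avoid writing the secret
    # itself into trace logs.
    import getpass
    import logging

    logging.basicConfig(level=logging.INFO)

    password = getpass.getpass("FTP password: ")   # nothing echoes while typing
    logging.info("collected %d-character password", len(password))  # label, not value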

BEST PRACTICE: All modern file transfer clients and file transfer servers should steer clear of these problems; these are entry-level security concerns and several application security design guidelines (e.g., “Microsoft’s Design Guidelines for Secure Web Applications” and the SANS Institute’s “Security Checklist for Web Application Design”) have covered this for years.

 

Community Management

“Community Management” is a marketing term used to describe technology and services that use external authentication technology to provision (or “onboard“) users or partners using rich profile definitions and which allows users and partners to maintain elements of their own profiles (e.g., contacts, email addresses, member users with limited rights, etc.).

File transfer and/or EDI solutions that provide community management capabilities are either VANs or direct competitors to VANs.

 

Compliance

“Compliance” to a standard is weaker than “validation” or “certification” against a standard.  Compliance indicates that a vendor recognizes a particular standard and has chosen to make design decisions that encompass most, if not all, of the standard.

When a vendor has implemented all of the required standard, that vendor will frequently upgrade their statement to “completely compliant” or “guaranteed compliant.”

A common example of compliance in the file transfer industry is a claim to “SFTP interoperability.”  Today, there is no universally-recognized third-party laboratory that will test, validate and stand behind these claims, but there are hundreds of vendors who claim that their products are compliant with various known SFTP standards, such as compatibility with OpenSSH and/or fidelity to RFC 4250.

Another common example of compliance in the file transfer industry is “FIPS compliance“.  This slippery phrase often indicates that the cryptography used in a particular solution implements some or all of the algorithms specified in FIPS 140-2 (e.g., AES) but that the underlying cryptography component has not been validated by a third party.

 

Control File

A control file is a special file that is sent along with one or more data files to tell applications that handle the data files how to handle them.  Control files are typically created by the same application that originally sends files into a file transfer system.

The most common type of control file is a “trigger file“.  Trigger files are used to initiate further processing or retransmission of files already uploaded to or present on a separate file transfer system.

Other types of control files are often used to communicate final file names, intermediate processing applications, conditions of transfer/processing success or other metadata.

There is no standard format for control files, but many control files are either simple “flat files” with one file configuration entry in each line or simple XML files.

There are three general expectations made on file transfer systems by applications that submit control files.

Send and Forget (No Control File): When applications send files into a file transfer system they rely on the file transfer system to correctly determine when files can safely be processed or retransmitted.   This is often a safe assumption, especially when each file can be processed on its own and each file can be processed immediately.

Send and Commit: The use of a control file allows applications to commit a group of transmitted files to final acceptance, further processing or retransmission.  The use of a control file is often used here when two or more files need to be submitted together to allow final acceptance, further processing or retransmission.

Send, Commit and Confirm: If a file transfer application returns a status code or file after acting on a submitted control file then it can confirm both the original files sent and the contents of the control file.  The use of the extra confirmation step is often used when the submitting application wants to check up on the transfer status or history of its submitted files.  (In this case, either the submitted control file or resulting confirmation file will contain an ID the submitting application can use to perform lookups after the initial confirmation.)
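
As an illustration of the “Send and Commit” pattern above, the Python sketch below writes a hypothetical flat control file that commits two data files together; the field names are made up, since control files have no standard format:

    # Hypothetical "send and commit" pattern: the data files are uploaded
    # first, then a small control (trigger) file commits the batch.  The
    # flat-file layout is made up, since control files have no standard format.
    data_files = ["payroll_20240101.csv", "payroll_20240101_summary.csv"]

    with open("payroll_20240101.ctl", "w") as ctl:
        ctl.write("batch_id=PAY-20240101\n")
        for name in data_files:
            ctl.write(f"file={name}\n")
        ctl.write("action=process\n")

    # Uploading the .ctl file last tells the receiving system that both data
    # files are complete and may be processed together.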

 

Core FTP

Core FTP is a secure FTP software brand that includes a free desktop FTP client (Core FTP LE), a commercial FTP client (Core FTP Pro) and an FTP server (Core FTP Server).

 

CRC

CRC (“cyclic redundancy check”) is an early data integrity check standard (a.k.a. “hash”).  Most CRC codes are 32-bit numbers and are usually represented in hexadecimal format (e.g., “567890AB”).

CRC was commonly used with modem-based data transfer systems because it was cheap to calculate and fast on early computers.   Its use carried over into FTP software and some vendors still support CRC in their applications today (e.g., FTP’s unofficial “XCRC” command).

However, CRC is not considered a “cryptographic quality” integrity check because it is trivial for an attacker to create bad data that bears the same CRC code as a set of good data.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks instead of CRC.  However, FTP software that supports the XCRC command can be used to supplement stronger integrity checks, particularly over unreliable connections.  (i.e., If your application calculates a bad CRC code for a particular transfer, you can avoid the effort of calculating a more expensive SHA-1 or SHA-2 hash.)
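
The cheap-check-first idea in the best practice above can be as simple as the following Python sketch (filenames and the expected CRC value are illustrative):

    # Cheap CRC-32 check first; only compute the slower SHA-256 (a SHA-2
    # variant) when the CRC matches.  Filenames and values are illustrative.
    import hashlib
    import zlib

    def crc32_of(path):
        crc = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                crc = zlib.crc32(chunk, crc)
        return f"{crc:08X}"                      # e.g. "567890AB"

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if crc32_of("received.dat") != "567890AB":   # CRC advertised by the sender
        print("CRC mismatch: retransmit, skip the expensive hash")
    else:
        print("CRC matches; confirming with SHA-256:", sha256_of("received.dat"))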

 

Cut-Off Time

In file transfer operations, a cut-off time is the specific time of day by which a processor must receive a batch or file for processing to begin that day. The processor, not the sender, decides the cut-off time.

For example, if a processor publishes a cut-off time of 5pm, then a file received at 4:59pm will be processed today, but a file received at 5:01pm may not. (Special dispensation is often granted to particular files on an operator-by-operator basis.)
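
In scheduling code, a cut-off check reduces to a simple time comparison, as in this small Python sketch (using the 5pm cut-off from the example above):

    # Decide whether a just-received file makes today's processing window,
    # using the 5 pm cut-off from the example above.
    from datetime import datetime, time

    CUT_OFF = time(17, 0)          # 5:00 pm, decided by the processor

    received_at = datetime.now()
    if received_at.time() <= CUT_OFF:
        print("Received before cut-off: process today")
    else:
        print("Received after cut-off: queue for the next processing day")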

 

Cyber Liability

Cyber liability is the risk posed by conducting business over the Internet, over other networks or using electronic storage technology.  Insurance can be bought and “risk based” security strategies can be used to mitigate against both the first- and third-party risks caused by cyber liability.

A “first party” cyber liability occurs when your own information is breached.  For example, a hack that results in the exposure of your own trade secrets would create a first party cyber liability.

A “third party” cyber liability occurs when customer or partner information your organization has promised to keep safe is breached.  For example, a hack that results in the exposure of your customer’s Social Security numbers would create a third party cyber liability.

Companies have compelling reasons to avoid both types of cyber liability, but third party cyber liabilities can be devastating.  First party cyber liabilities threaten a company’s competitiveness, but third party cyber liabilities often ruin brands, open the door to million-dollar lawsuits and trigger statutory fines (e.g., HIPAA HITECH’s $50,000 per-incident “willful neglect” fine).

File transfer technology frequently transmits information whose disclosure would lead to first- or third-party cyber liabilities.

BEST PRACTICE: File transfer technology, policy and procedure decisions should be made under the auspices of a risk-based security strategy that takes into account both first- and third-party cyber liabilities.

 

Cyberduck

Cyberduck is a free open source file transfer client for Windows and Macintosh desktops.

Cyberduck offers support for FTP, FTPS, SFTP, Amazon S3, Rackspace Cloud Files, Google Storage for Developers and Amazon CloudFront.  Cyberduck features synchronisation across multiple server types and support for many languages.

Cyberduck’s official site is cyberduck.ch.  It is licensed under the GPL.

 

D

Data Controller

This is the individual within an organisation who is responsible for the data. The data controller defines the data collected and the reasons for processing.

 

Data Leak Prevention

Data Loss Prevention (DLP), also known as Data Leak Prevention, is a set of strategies, tools, and technologies designed to protect sensitive data from unauthorised disclosure or loss. DLP aims to prevent the unauthorised transmission, sharing, or exposure of sensitive data, such as intellectual property, financial data, personally identifiable information (PII), and confidential business data. The goal is to maintain data confidentiality and comply with regulatory requirements while allowing legitimate data usage within an organisation.

 
 

Data Portability

Under GDPR, individuals have the right to have their personal data transferred to another system or organisation.
 
 

Data Processor

Someone who processes data on behalf of the Data Controller.
 


Data Protection Act

The Data Protection Act of 1998 was brought into force on March 1st 2000. Introduced to give UK citizens the right to access personal information held by ‘data controllers’ (any individual within an organisation handling personal data) within the United Kingdom, the Data Protection Act also details principles concerning the way in which this sensitive data is managed.
 
There are eight core principles covered under the Data Protection Act. These are as follows:

1. Personal data should be processed fairly and lawfully.
2. Data should only be obtained for specified purposes and should not be further processed in a manner incompatible with these purposes.
3. Personal data should be adequate, relevant and not excessive in relation to the purposes for which they were collected.
4. Personal data should be accurate and, where necessary, kept up to date.
5. Personal data should not be kept longer than is needed for its intended purpose.
6. Personal data should be processed in accordance with the rights of the individual whom the information concerns.
7. Appropriate measures should be taken against unauthorised or unlawful processing of personal data and against accidental loss, destruction or damage.
8. Personal data should not be transferred outside the European Economic Area (the EU states plus Liechtenstein, Iceland and Norway).
The principle outlined within the Data Protection Act most applicable to the implementation of secure file transfer provisions is the seventh principle, which states that:

“Having regard to the state of technological development and the cost of implementing any measures, the measures MUST ensure a level of security appropriate to – the harm that might result from such unauthorised or unlawful processing or accidental loss, destruction or damage as are mentioned in the seventh principle AND the nature of the data protected.”

Therefore all organisations, as governed by UK law, must ensure that adequate safeguards are in place regarding the storage and processing of personal data.

Our specialists at Pro2col can help you to source and implement a secure file transfer solution to suit your business requirements and align the processing of data, in accordance with The Data Protection Act. Please contact us on 0333 123 1240 for more information.

 

Data Protection by Design & by Default

This is an overarching principle of GDPR. It means building data protection into business processes, products and services from the outset.



Data Protection Impact Assessment (DPIA)

This is a document that describes the nature of the data, the purpose of the transfer, how it is performed and the security configuration. A DPIA is a key requirement of GDPR.
 
 

Data Residency 

Data residency refers to the concept that data is subject to the laws and regulations of the country or region in which it is physically located or stored. It pertains to the legal and compliance aspects of data handling and storage, and it can have implications for file transfer and storage technologies in the following ways:

  • Data Privacy and Protection Regulations
  • Cross-Border Data Transfers
  • Cloud-Based MFT and Data Residency
  • Encryption and Data Security
  • Data Auditing and Compliance Reporting

In summary, data residency is a critical consideration for organisations using Managed File Transfer solutions, especially when dealing with sensitive or regulated data. MFT software should offer features and options that allow organisations to comply with data residency regulations by ensuring that data is transferred, stored, and accessed in accordance with local and international laws or, where hosted in the cloud, by offering a choice of service location to suit the needs of the organisation.


Data Subject

This is the individual that the data is about.



DEP

DEP is sometimes used as an abbreviation for “Data Exchange Partner”.
 

 

DEPCON

DEPCON is the common name for the Unisys Distributed Enterprise Print Controller software. This software is often deployed in financial data centers that use it to break apart and distribute aggregated reports. As more and more print jobs moved to electronic distribution formats, file transfer technology was frequently applied either to handle incoming report batches or to deliver the final product.



Deprovisioning

Deprovisioning is the act of removing access from and freeing up resources reserved by end users and their file transfer workflows.  Rapid removal of access upon termination or end of contract is key to any organisation. Freeing up of related resources (such as disk space, certificates, ports, etc.) is also important, but often follows removal of access by a day or more (especially when overnight processes are used to free up resources).

The act of deprovisioning should always be audited, and the audit information should include the identity of the person who authorised the act and any technical actions the system took to deprovision the user.

Most file transfer servers today allow administrators to chain to Active Directory (AD), LDAP, RADIUS or other external authentication sources to allow centralised management (and thus deprovisioning) of authentication and access.

“Rollback” of deprovisioned users is a competitive differentiator across different file transfer servers, and varies widely from “just restore credentials”, through “also restore access” and on to “also restore files and workflows”.

BEST PRACTICE: Whenever possible, implementers of file transfer technology should use an external authentication source to control access and privileges of end users.  When an external authentication source is used to control authentication in this manner, deprovisioning on the file transfer server occurs at the moment the user is disabled or deleted on the central authentication server.



DES

DES (“Data Encryption Standard”) is an open encryption standard that offers weak encryption at 56-bit strength.  DES used to be considered strong encryption, but the world’s fastest computers can now break DES in near real time.  A cryptographically valid improvement on DES is 3DES (“Triple DES”) – a strong encryption standard that is still in use.

DES was one of the first open encryption standards designed for widespread use in computing environments, and it was submitted by IBM in 1975 based on previous work on an algorithm called Lucifer.  It was also the first encryption algorithm to be specified by a “FIPS publication”: FIPS 46 (and subsequent FIPS 46-1, 46-2 and 46-3 revisions).

 

Digital Rights Management (DRM)

Digital Rights Management (DRM) is a set of technologies, policies, and access control mechanisms used to protect and manage digital content, such as documents, media files, software, and eBooks. The primary goal of DRM is to enforce and protect the intellectual property rights of content creators and owners by controlling how their digital assets are used, copied, and distributed. DRM systems are commonly used to prevent unauthorised access, copying, sharing, and piracy of digital content.

Key aspects and components of DRM include:

  • Content Encryption
  • Access Control
  • Copy Protection
  • Expiration and Licensing
  • Secure Distribution
  • Digital Watermarking
  • Monitoring and Reporting
  • Compatibility
  • Prevention of Printing 

While DRM offers content owners and creators a means to protect their intellectual property and revenue streams, it has also been the subject of debate and controversy due to concerns about its impact on user privacy, fair use rights, and potential limitations on content sharing and interoperability. As a result, the implementation of DRM varies, and its use is often a balance between content protection and user experience.

 

Document Definition

In file transfer, a “document definition” typically refers to a very specific, field-by-field description of a single document format (such as an ACH file) or single set of transaction data (such as EDI’s “997” Functional Acknowledgement).

Document definitions are used in transformation maps and can often be used outside of maps to validate the format of individual documents.

The best known example of a document definition language today is XML’s DTD (“Document Type Definition”).

Many transformation engines understand XML DTDs and some use standard transformation mechanisms like XSLT (“Extensible Stylesheet Language Transformations”).  However, most transformation engines depend on proprietary mapping formats (particularly for custom maps) that prevent much interoperability from one vendor to another.
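
As a small illustration, the Python sketch below validates a document against a made-up DTD using the third-party lxml package; the element names are hypothetical, not a real EDI document definition:

    # Validate a document against a made-up DTD using the third-party lxml
    # package (pip install lxml).  Element names are illustrative only.
    from io import StringIO

    from lxml import etree

    dtd = etree.DTD(StringIO("""
    <!ELEMENT remittance (payer, amount)>
    <!ELEMENT payer (#PCDATA)>
    <!ELEMENT amount (#PCDATA)>
    """))

    doc = etree.XML("<remittance><payer>ACME</payer><amount>100.00</amount></remittance>")
    print(dtd.validate(doc))   # True when the document matches the definition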

 

Double Post

A “double post” is the act of sending a file in for processing twice on a production system.

Most operators consider a “double post” to be far worse than a missing file or missing transmission, because files sent in for internal processing often cannot be cleanly backed out.  Double post violations involving hundreds or thousands of duplicate payment, payroll and provisioning transactions are relatively common experiences and are feared by all levels of management because they take considerable time, expense and loss of face to clean up.

There are many technologies and techniques used today to guard against double posts.  These include:

Remembering the cryptographic hashes of recently transmitted files.  This allows file transfer software to catch double posts of identical files and to quarantine and send alerts appropriately.  (A sketch of this approach appears after this list.)

Enforcing naming and key-record schemes on incoming files.  This often prevents external systems from blindly sending the same file or batch of records again and again.

Synchronizing internal knowledge of records processed with external file transfer systems.  This advanced technique is EDI-ish in nature, as it requires file transfer technology to crack, read and interpret incoming files.  However, it allows more sophisticated handling of exceptions (such as possible “ignore and go on” cases) than simpler “accept/reject file” workflows.
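
A minimal Python sketch of the hash-remembering approach from the first item above (persistence and expiry of the digest store are out of scope):

    # Catch double posts by remembering SHA-256 digests of recently received
    # files.  Persistence and expiry of the digest store are out of scope.
    import hashlib

    recent_digests = set()   # in production: a database with a retention window

    def accept_file(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        fingerprint = digest.hexdigest()
        if fingerprint in recent_digests:
            return False     # identical file seen before: quarantine and alert
        recent_digests.add(fingerprint)
        return True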


Drummond Certified

In the file transfer industry, “Drummond Certified” typically indicates that the AS2 implementation in a particular software package has been tested and approved by the Drummond Group.

Most file transfer protocols follow RFCs, and AS2 is no exception.  (AS2 is specified in RFC 4130, and the “MDNs” AS2 relies on are specified in RFC 3798).  However, the AS2 protocol and Drummond certification are closely tied together like no other file transfer protocol or certification because of Wal-Mart, the world’s largest retailer.  In 2002 Wal-Mart announced that it would be standardizing partner communications on the AS2 standard, and that companies that wished to connect to it must have their AS2 software validated by Drummond.  As Wal-Mart and its massive supply chain led, so followed the rest of the industry.

There are two levels of tests in Drummond certification.  Interoperability is the basic level against which all products must test and pass.  There is also a second level of “optional profile” tests which check optional but frequently desirable features such as AS2 Restart.  There are also minor implementation differences, such as certificate import/export compatibility, that, combined with optional AS2 profiles, allow for significant differences between Drummond certified implementations, though the core protocol and basic options are generally safe between tested products.

Not every product that claims its AS2 implementation is Drummond certified will itself be entirely Drummond certified.  Some software, such as Ipswitch’s MOVEit and MessageWay software and Globalscape’s EFT software, makes use of third-party Drummond certified libraries such as n Software’s IP*Works! EDI Engine.  In those cases, look for the name of the library your file transfer vendor uses instead of the file transfer vendor product on Drummond’s official list.


Drummond Group

The Drummond Group is a privately held test laboratory that is best known in the file transfer industry as the official certification body behind the AS2 standard.  See “Drummond Certified” for more information about the AS2 certification.

The Drummond Group also offers AS1 and ebXML validation, quality assurance, and other related services.

E

EAI

EAI is short for “Enterprise Application Integration“, a methodology which balances the desire for a seamless experience across heterogeneous enterprise applications and datasets of various origins, scope and capability against the need to avoid major changes to those applications or datasets.



ECBS

The European Committee for Banking Standards (“ECBS”) was a standards body that focused on European banking technology and infrastructure.  It was formed in 1992 and disbanded in 2006; it has since been replaced by the European Payments Council.

It is still common to see references to the ECBS in GSIT and PeSIT documentation.

 

EFSS / CCP

EFSS stands for Enterprise File Synchronisation and Sharing, while CCP stands for Content Collaboration Platform. These two terms are related and are often used interchangeably, as they refer to technologies and solutions that facilitate the sharing, synchronisation, and collaboration of digital content within organizations.

Enterprise File Sync and Share (EFSS):

EFSS refers to software and services designed to enable organisations to securely store, synchronise, share, and collaborate on files and documents across multiple devices and with different users or teams, while the file or document remains securely in place on the EFSS solution.

Content Collaboration Platform:

CCP is a broader term that encompasses EFSS but extends beyond file sharing and synchronization to include a wider range of content-related collaboration tools and capabilities. In addition to file sharing and synchronisation, CCP solutions may offer features such as content management, workflow automation, real-time collaboration on documents, digital asset management, and integrations with third-party applications.

In summary, EFSS is a subset of CCP, focusing specifically on file synchronization and sharing within an enterprise context. CCP, on the other hand, encompasses a broader spectrum of content-related collaboration tools and capabilities, making it suitable for organisations with more diverse and complex content collaboration needs. Both EFSS and CCP solutions are valuable for improving productivity and collaboration within modern workplaces.

 

Electronic Data Interchange (EDI)

EDI is a computer-to-computer exchange of business documents. This exchange is based on a standard electronic format which allows business partners to interact.

 

Enterprise Application Integration

Enterprise Application Integration (“EAI”) is a methodology which balances the desire for a seamless experience across heterogeneous enterprise applications and datasets of various origins, scope and capability against the need to avoid major changes to those applications or datasets.

Today, EAI often uses ESB (“Enterprise Service Bus”) infrastructure to allow these various applications to communicate with each other.  Before ESB, MOM (“Message-Oriented Middleware”) would have been used instead.

Today’s convergence of file transfer and EAI systems was foretold by Steve Cragg’s 2003 white paper entitled “File Transfer for the Future – Using modern file transfer solutions as part of an EAI strategy”.   In that paper, Cragg wrote that, “judicious use of file transfer in its modern form as part of an overall EAI strategy can reduce overall business risk, deliver an attractive level of ROI, speed time to market for new services and enable new business opportunities quickly (for example B2B).”

 

Enterprise Service Bus

An Enterprise Service Bus (“ESB”) is a modern integration concept that refers to architectural patterns or specific technologies designed to rapidly interconnect heterogeneous applications across different operating systems, platforms and deployment models.

ESBs include a set of capabilities that speed and standardise a Service-Oriented Architecture (“SOA”), including service creation and mediation, routing, data transformation, and management of messages between endpoints.

With the rise of SOA in the mid-2000’s, ESBs took over from MOM (“Message-Oriented Middleware”) as the leading technology behind EAI (“Enterprise Application Integration”).

Examples of commonly deployed ESBs include MuleSoft’s open source Mule ESB, IBM WebSphere ESB, Red Hat JBoss and Oracle ESB.  The Java Business Integration project (“JBI”) from Apache is also often referred to as an ESB.

 

ESB

ESB is short for “Enterprise Service Bus“, a modern integration technology used to quickly tie heterogeneous applications across different operating systems, platforms and deployment models.  

 

European Payments Council

The European Payments Council (“EPC”) coordinates European inter-banking technology and protocols, particularly in relation to payments.  In 2011 the EPC boasted that it processed 71.5 billion electronic payment transactions.

The EPC assumed all the former duties of the European Committee for Banking Standards (“ECBS”) in 2006.  It is now the major driver behind the Single Euro Payments Area (SEPA) initiative.

The official site of the EPC is http://www.europeanpaymentscouncil.eu/

 

External Authentication

External authentication is the use of third-party authentication sources to decide whether a user should be allowed access to a system, and often what level of access an authenticated user enjoys on a system.

In file transfer, external authentication frequently refers to the use of Active Directory (AD), LDAP or RADIUS servers, and can also refer to the use of various single sign on (SSO) technologies.

External authentication sources typically provide username information and password authentication.  Other types of authentication available include client certificates (particularly with AD or LDAP servers), PINs from hardware tokens (common with RADIUS servers) or soft/browser tokens (common with SSO technology).

External authentication sources often provide file transfer servers with the full name, email address and other contact information related to an authenticating user.  They can also provide group membership, home folder, address book and access privileges.  When external authentication technology involves particularly rich user or partner profiles and allows users and partners to maintain their own information, then the external authentication technology used to onboard users and partners is often called “Community Management” technology.
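
As a small illustration of password authentication against an external source, the hypothetical Python sketch below binds to an LDAP directory using the third-party ldap3 package; the server address and DN layout are made up:

    # Check a username/password against an external LDAP directory using the
    # third-party ldap3 package.  Server address and DN layout are made up.
    from ldap3 import Connection, Server

    def ldap_authenticate(username, password):
        server = Server("ldaps://ldap.example.com")
        user_dn = f"uid={username},ou=people,dc=example,dc=com"
        conn = Connection(server, user=user_dn, password=password)
        return conn.bind()   # True if the directory accepted the credentials

    if ldap_authenticate("jsmith", "s3cret"):
        print("access granted via external authentication")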

See also “provisioning” and “deprovisioning“.

Extreme File Transfer

Here at Pro2col we’re increasingly being asked by our clients to help them move large data sets. As everyone knows, the amount of data is increasing, as are file sizes. It is now common in our discussions to talk about files many gigabytes in size. The challenge this presents, however, is how to move the data from point A to point B, as invariably we’re finding that companies need to move these volumes of data halfway around the world. Welcome to Extreme File Transfer! Extreme file transfer is an expression which has become more widely adopted in recent times, but what does it mean? IDC describes it as:
 
“Extreme file transfer requirements come from the need to solve problems around file size. In this case, the file may simply be too big and the target too far away to reliably deliver over TCP/IP because of shortcomings in this networking protocol. In other cases, there is a problem delivering a file within an allowed time window, and therefore, there is a need to find an alternative approach.” [“IDC competitive review of MFT Software” October 2010]
 
IDC pretty much hits the nail on the head here, although I’d place a little more emphasis on the infrastructure over which extreme file transfer takes place. The efficiency of TCP-based file transfer protocols such as FTP reduces dramatically as latency increases the round trip time between client and server. The result is greater sensitivity to packet loss and rapidly decreasing throughput.

For file-based workflows there are some great solutions available which address these issues. Vendors have taken various approaches, such as breaking files into smaller pieces, using hardware acceleration, compression or synchronisation, or even shipping a HDD, but the most effective are those that utilise a variation of the UDP protocol. UDP itself offers no delivery guarantees, but when some controls are added it becomes the most effective way of moving extremely large files over extremely challenging connections. These re-engineered protocols form the cornerstone of extreme file transfer solutions. The vendors in this space have designed their solutions around these typical business requirements:
  • Disaster recovery and business continuity
  • Content distribution and collection, e.g., software or source code updates, or CDN scenarios
  • Continuous sync – near real time syncing for ‘active-active’ style HA
  • Supports master slave basic replication, but also more complex bi-directional sync and mesh scenarios
  • Person to person distribution of digital assets
  • Collaboration and exchange for geographically-distributed teams
  • File based review, approval and quality assurance workflows
If your business needs to transfer extremely large files or volumes of data, speak to our team of expert consultants. As independent file transfer specialists since 2003, we’re able to provide an objective, vendor-agnostic view to finding the right solution for your file transfer needs. Speak to one of our consultants now on 0333 123 1240.

F

FDIC

The FDIC (“Federal Deposit Insurance Corporation”) directly examines and supervises more than 4,900 United States banks for operational safety and soundness.  (As of January 2011, there were just under 10,000 banks in the United States; about half are chartered by the federal government.)

As part of its bank examinations, the FDIC often inspects the selection and implementation of file transfer technology (as part of its IT evaluation) and business processes that involve file transfer technology (as part of its overall risk assessment).

The FDIC’s official web site is www.fdic.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “the Fed” (U.S. central bank), “NCUA” (credit unions), “OCC” (national and foreign banks) and “OTS” (savings and loans).

 

Federal Reserve

The Federal Reserve (also “the Fed”) is the central bank of the United States.  It behaves like a regulatory agency in some areas, but its main role in the file transfer industry is as the primary clearinghouse for interbank transactions batched up in files.  Nearly every bank or bank service center has a file transfer connection to the Fed.

As of January 2011 there were exactly three approved ways to conduct file transfer with the Federal Reserve.  These were:

  • Perform interactive file transfer through a web browser based application.  This has serious disadvantages for data centres that try to automate key business processes, as the Fed’s interactive interface has proven to be resistant to screen scraping automation, and no scriptable web services are supported.
  • Perform automated file transfers through IBM’s Sterling Commerce software.  This is the Fed’s preferred method for high volumes.   Since Sterling Commerce software is among the most expensive in the file transfer industry, many institutions use a small number of installations of Sterling Commerce Connect:Direct for their Fed connections and use other automation software to drive Fed transfers through Perl scripts, Java applications or other methods.
  • Perform automated file transfers through Axway’s Tumbleweed software.  This is the Fed’s preferred method for medium volumes.  As with Connect:Direct, Tumbleweed Fed connections are often minimised and scripted by third-party software to reduce the overall cost of a Fed-connected file transfer installation.

The Federal Reserve’s official web site is www.federalreserve.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “FDIC” (federally chartered banks), “NCUA” (credit unions), “OCC” (national and foreign banks) and “OTS” (savings and loans).


FFIEC

The FFIEC (“Federal Financial Institutions Examination Council”) is a United States government regulatory body that ensures that principles, standards, and report forms are uniform across the most important financial regulatory agencies in the country.

The agencies involved include the Federal Reserve (“the Fed”), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS).  Since 2006, a State Liaison Committee (SLC) has also been involved; the SLC’s membership includes the Conference of State Bank Supervisors, the American Council of State Savings Supervisors, and the National Association of State Credit Union Supervisors.

The FFIEC’s official web site is www.ffiec.gov.

 

FIPS 140-2

FIPS 140-2 is the most commonly referenced cryptography standard published by NIST.  “FIPS 140-2 cryptography” is a phrase used to indicate that NIST has tested a particular cryptography implementation and found that it meets FIPS 140-2 requirements.
Among other things, FIPS 140-2 specifies which encryption algorithms (AES and Triple DES), minimum bit lengths, hash algorithms (SHA-1 and SHA-2) and key negotiation standards are allowed in U.S. federal applications.  (Canada also uses this standard for its federal standard – that is why the official FIPS 140-2 validation symbol is a garish mashup of the American blue/white stars and the Canadian red/white maple leaf.)
Many widely used cryptographic modules, whether implemented in hardware or software, have been FIPS 140-2 validated.  High quality software implementations are also an integrated component of most modern computing platforms, including operating systems from Microsoft, Java runtime environments from Oracle and the ubiquitous OpenSSL library.
Almost all file transfer applications that claim “FIPS validated cryptography” make use of one or more FIPS validated cryptographic libraries, but are not themselves entirely qualified under FIPS.  This is not, by itself, a security problem: FIPS 140 has a narrow focus and other validation programs are available to cover entire applications.
FIPS 140-2 will soon be replaced by FIPS 140-3, but implementations validated under FIPS 140-2 will likely be allowed for some period of time.
 


FIPS 140-3

FIPS 140-3 will soon replace FIPS 140-2 as the standard NIST uses to validate cryptographic libraries. The standard is still in draft status, but could be issued in 2011.

FIPS 140-2 has four levels of security: most cryptographic software uses “Level 1” and most cryptographic hardware uses “Level 3”.  FIPS 140-3 expands that to five levels, but the minimum (and thus most common) levels for software and hardware will likely remain Levels 1 and 3, respectively.

 

FIPS Compliant

“FIPS compliant” is a slippery phrase that often indicates that the cryptography used in a particular solution implements some or all of the algorithms specified in FIPS 140-2 (e.g., AES) but that the underlying cryptography component has not been validated by NIST laboratories. “FIPS validated” is a much stronger statement.

 

FIPS Validated

“FIPS validated” is a label that indicates that the cryptography used in a particular solution implements some or all of the algorithms specified in FIPS 140-2 (e.g., AES) and that the underlying cryptography component has been validated by NIST laboratories.  See “FIPS compliant” for a weaker statement.

 

Firefox

Mozilla’s Firefox is a free, open source web browser that offers a similar browsing experience across a wide variety of desktop operating systems, including Windows, Macintosh and some Linux variants.

As of December 2010, Firefox held about 30% of the desktop browser market, making it the #2 browser behind Internet Explorer.  Firefox uses an aggressive auto-update feature that ensures that most users are running the most recent major version of the browser within three months of release.

Firefox’s native FTP capabilities allow it to connect to FTP servers using passive mode and download specific files as anonymous or authenticated users with passwords.   Many advanced Firefox users use the free FireFTP Firefox extension to add the ability to upload files, connect using FTPS, connect using SFTP and browse local and remote directory structures.

Firefox’s official web site is www.mozilla.com/en-US/firefox/.

BEST PRACTICE: All credible file transfer applications that offer browser support for administrative, reporting or end user interfaces should support the Firefox web browser, and file transfer vendors should commit to supporting any new Firefox browser version within a few months of its release.   In addition, file transfer vendors that offer FTP, FTPS and/or SFTP support in their server products should support the FireFTP extension to Firefox.

 

FireFTP

Mime Čuvalo’s FireFTP is a free, full-featured, interactive FTP client that plugs into Mozilla Firefox as an add-on.

FireFTP supports FTP, FTPS and SFTP, and can remember a large number of connection profiles.  It also offers integrity checks using MD5 and SHA-1, on-the-fly file compression (i.e., “MODE Z”), support for most Firefox platforms, multiple languages and IPv6.

FireFTP’s official site is fireftp.mozdev.org.

BEST PRACTICE: File transfer vendors that offer FTP, FTPS and/or SFTP support in their server products should support the FireFTP extension to Firefox.

 

Firewall Friendly

A file transfer protocol that is “firewall friendly” typically has most or all of the following attributes:

1) Uses a single port
2) Connects in to a server from the Internet
3) Uses TCP (so session-aware firewalls can inspect it)
4) Can be terminated or proxied by widely available proxy servers

For example:

Active-mode FTP is NOT firewall friendly because it violates #1 and #2.
Most WAN acceleration protocols are NOT firewall friendly because they violate #3 (most use UDP) and #4.
SSH’s SFTP is QUITE firewall friendly because it conforms to #1, #2 and #3.
HTTP/S is probably the MOST firewall friendly protocol because it conforms to #1, #2, #3 and #4.

As these examples suggest, the attribute file transfer protocols most often give up to enjoy firewall friendliness is transfer speed.

When proprietary file transfer “gateways” are deployed in a DMZ network segment for use with specific internal file transfer servers, the “firewall friendliness” of the proprietary protocol used to link gateway and internal server consists of the following attributes instead:

1) Internal server MUST connect to DMZ-resident server (connections directly from the DMZ segment to the internal segment are NOT firewall friendly)
2) SHOULD use a single port (less important than #1)
3) SHOULD use TCP (less important than #2)
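
One quick, practical check of attributes #1 and #3 is simply to test whether a single outbound TCP connection to the server’s port succeeds from behind the firewall. The sketch below does this in Python; the hostname and ports are placeholders for illustration only:

    import socket

    # Test whether a single outbound TCP connection succeeds from behind
    # the firewall (placeholder hostname and ports shown).
    def reachable(host, port, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("SFTP  (22):", reachable("transfer.example.com", 22))
    print("HTTPS (443):", reachable("transfer.example.com", 443))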


FTP File Transfer

FTP (the File Transfer Protocol) is a method used to transfer files from one computer to another over a network, whether that is an internal network or, more commonly, a Wide Area Network such as the Internet.

An FTP site is a server, hosted on the Internet, used as an exchange area to upload files to and download files from. FTP sites are accessed using a software program known as an FTP client. Every FTP site has a hostname and this, along with a username and password assigned to you by the FTP site administrator, is required to connect the FTP client to the site.

Once connected to the FTP site, the FTP client allows the user to browse files and folders on both their own computer and the FTP site. Files can then be selected and either uploaded to or downloaded from the FTP site.

FTP is not a particularly simple file transfer protocol to use and has a number of drawbacks, such as susceptibility to latency (slow transfers), a lack of reporting and no real security for data in transit, but it is still widely used because it is a cheap, often free, solution.
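
To make the client workflow described above concrete, here is a minimal sketch using Python’s built-in ftplib module; the hostname, credentials and file names are placeholders rather than details of any real FTP site:

    import ftplib

    # Connect to the FTP site with the hostname, username and password
    # assigned by the site administrator (placeholder values shown).
    with ftplib.FTP("ftp.example.com") as ftp:
        ftp.login(user="myuser", passwd="mypassword")
        print(ftp.nlst())                     # browse: list the remote directory

        with open("report.pdf", "rb") as f:   # upload a local file
            ftp.storbinary("STOR report.pdf", f)

        with open("invoice.pdf", "wb") as f:  # download a remote file
            ftp.retrbinary("RETR invoice.pdf", f.write)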

Pro2col offers a wide range of secure FTP alternative solutions – please contact us on 0333 123 1240 if you would like to find out more.

 

FTP with PGP

The term “FTP with PGP” describes a workflow that combines the strong end-to-end encryption, integrity and signing of PGP with the FTP transfer protocol.  While FTPS can and often should be used to protect your FTP credentials, the underlying protocol in FTP with PGP workflows is often just plain old FTP.

BEST PRACTICE: (If you like FTP with PGP.) FTP with PGP is fine as long as care is taken to protect the FTP username and password credentials while they are in transit.  The easiest, most reliable and most interoperable way to protect FTP credentials is to use FTPS instead of non-secure FTP.

BEST PRACTICE: (If you want an alternative to FTP with PGP.) The AS1, AS2 and AS3 protocols all provide the same benefits as FTP with PGP, plus the benefit of a signed receipt to provide non-repudiation.  Several vendors also have their own way to provide the same benefits of FTP with PGP without onerous key exchange, without a separate encrypt-in-transit step or with streaming encryption; ask your file transfer vendors what they offer as an alternative to FTP with PGP.
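
As a rough illustration of the combined workflow, the sketch below encrypts a file with the GnuPG command-line tool and then uploads the ciphertext over FTPS, following the best practice above of protecting the credentials in transit. It assumes gpg is installed and the recipient’s public key has already been imported; the host, credentials and key name are placeholders:

    import ftplib
    import subprocess

    # Encrypt the payload end-to-end with the recipient's public PGP key
    # (assumes GnuPG is installed and "partner@example.com" is imported).
    subprocess.run(
        ["gpg", "--yes", "--encrypt", "--recipient", "partner@example.com",
         "--output", "payroll.csv.gpg", "payroll.csv"],
        check=True,
    )

    # Send the ciphertext over FTPS so the username/password are protected too.
    with ftplib.FTP_TLS("ftp.example.com") as ftps:
        ftps.login(user="myuser", passwd="mypassword")
        ftps.prot_p()  # encrypt the data channel as well as the control channel
        with open("payroll.csv.gpg", "rb") as f:
            ftps.storbinary("STOR payroll.csv.gpg", f)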


FTPS File Transfer

FTPS File Transfer, also referred to as FTP Secure or FTP-SSL, is a secure means of sending data over a network. Often misidentified as SFTP (an independent communications protocol in its own right), FTPS describes the sending of data using basic FTP run over a cryptographic protocol such as SSL (Secure Sockets Layer) or TLS (Transport Layer Security).  The default port for FTPS is port 21 for explicit mode or port 990 for implicit mode.
These cryptographic protocols ensure that the connection established between a client and server is encrypted, maintaining the security and integrity of the data sent. Public/private key cryptography is used to authenticate the server and negotiate a symmetric session key, which then encrypts the data while it is in transit. If the data stream were intercepted during transmission, any documents in transit would therefore be illegible to hackers or eavesdroppers.
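
A minimal sketch of an explicit-mode FTPS session using Python’s built-in ftplib (which implements explicit mode on port 21; implicit mode on port 990 would need additional socket handling). The host and credentials are placeholders:

    import ftplib
    import ssl

    # Explicit-mode FTPS: connect on port 21, then upgrade the session to TLS.
    # A default SSL context enforces certificate verification against system CAs.
    ctx = ssl.create_default_context()
    ftps = ftplib.FTP_TLS("ftp.example.com", context=ctx)  # placeholder host
    ftps.login(user="myuser", passwd="mypassword")
    ftps.prot_p()            # also encrypt the data channel before transferring
    ftps.retrlines("LIST")   # this directory listing now travels encrypted
    ftps.quit()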
 

G

General Data Protection Regulation (GDPR)

GDPR is the new EU regulation for handling people’s personal data. It stands for ‘General Data Protection Regulation’ and comes into force on the 25th May 2018. This stringent set of security measures relates to how and where personal data is collected, handled and used. By reinforcing individuals’ rights and giving them back control, it’s hoped that the General Data Protection Regulation (GDPR) will restore confidence and strengthen the EU internal market.

GDPR contains 99 articles relating to all aspects of data protection. Some key elements include:

Data Protection by Design & by Default: This is at the heart of the General Data Protection Regulation (GDPR) and means building data protection into business processes, products and services from the outset.

Data storage, accessibility and processing: These require impact assessments and appropriate security measures, plus record keeping and regular audits.

Data Protection Impact Assessments (DPIA): A document that describes the nature of the data, the purpose of the transfer, how it is performed and the security configuration.

Consent: This requires organisations to give a clear explanation of what they will do with the data. The user must acknowledge agreement and this must be kept on record.

Right to erasure: This is where individuals can request that their personal data is erased permanently.

Subject access request (SAR): The data subject has the right to request all personal data a data controller has on them and this includes their supply chain.

Data portability: Individuals have the right to have their personal data transferred to another system or organisation.

Our managed file transfer specialists at Pro2col can help you to source and implement a secure file transfer solution to suit your business requirements and align your data processing with the relevant articles of the General Data Protection Regulation (GDPR). Please contact us on 0333 123 1240 for more information.

 

Gramm-Leach-Bliley (GLBA)

The Gramm-Leach-Bliley Act of 1999, also known as The Financial Modernisation Act, details regulations that financial institutions must adhere to in order to protect consumers’ financial information. The GLBA governs all financial institutions that hold what is classed as ‘personal data’, including insurance companies, security firms, banks, credit unions and retailers providing credit facilities.

Gramm-Leach-Bliley Rules and Provisions
The privacy requirements set out in GLBA are broken down into three distinct elements: the Financial Privacy Rule, the Safeguards Rule and the Pretexting Provisions.

The Financial Privacy Rule – Governs the collection of consumers’ private financial data by financial institutions and by companies that deal with such information. It requires all financial institutions to provide privacy notices to their customers before a relationship is established. These privacy notices must also detail the institution’s information sharing practices and give consumers the right to limit the sharing of their information in certain instances.

The Safeguards Rule – requires all financial institutions to record and implement a security plan that protects the confidentiality of their customers’ personal data.

The Pretexting Provisions – Pretexting refers to the use of false pretences to gain access to non-public personal information, e.g. impersonating an account holder on the phone in order to obtain personal details. GLBA requires those governed by the law to implement adequate provisions to safeguard against pretexting.

What are the implications of Gramm-Leach-Bliley in terms of file transfer?
In order to comply with GLBA when transferring sensitive data, financial institutions must ensure that they:

  • Prevent the transmission and delivery of files and documents containing non-public personal information to unauthorised recipients.
  • Enforce document delivery and receipt through enterprise-defined policies.
  • Provide detailed logs and audit trails of content access, authorisation, and users.

Our specialists at Pro2col can help you to source and implement a GLBA compliant, secure file transfer solution to suit your business requirements. Please contact Pro2col on 0333 123 1240 for more information.

H

HTTP File Transfer

HTTP file transfer uses HTTP (Hypertext Transfer Protocol), a set of rules for exchanging files on the World Wide Web. HTTP defines how messages are formatted and sent, as well as the actions web servers and browsers should take in response to commands.

A browser sends an HTTP command to a web server over an established TCP (Transmission Control Protocol) connection. The web server then sends HTML pages back to the user’s browser – these are what we would refer to as webpages. For example, when you enter a URL in a web browser, this actually sends an HTTP command to the web server, instructing it to fetch and transmit the requested webpage.

HTTP file transfer can also be used to send files from a web server to a web browser (or to any other requesting application that uses HTTP). HTTP is referred to as a stateless protocol because each command is independent of the others: the connection established between the browser and the web server is closed as soon as the web server responds to the initial command. In contrast, FTP is a two-way file transfer protocol – once a connection is established between a workstation and a file server, files can be transferred back and forth between the two entities.

The default port for HTTP is port 80.
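
As a small illustration of the request/response exchange described above, the following sketch uses Python’s built-in urllib to issue an HTTP GET command and save the response body to disk; the URL is a placeholder:

    import urllib.request

    # Issue an HTTP GET request; the server responds and the connection closes
    # (HTTP is stateless - each request/response pair stands on its own).
    url = "http://www.example.com/files/report.pdf"   # placeholder URL
    with urllib.request.urlopen(url) as response:
        print(response.status)                 # e.g. 200 on success
        with open("report.pdf", "wb") as f:
            f.write(response.read())           # save the transferred file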

For further information on the HTTP file transfer solutions we provide, please contact us on 0333 123 1240.

 

HTTPS File Transfer

HTTPS file transfer describes the combination of HTTP (Hypertext Transfer Protocol) and a secure protocol such as SSL or Transport Layer Security (TLS). It is used to send sensitive data over unsecured networks, for example the Internet.

These individual protocols operate at different levels of the TCP/IP network stack to create HTTPS. The HTTP protocol operates at the highest level of the TCP/IP model (the application layer) and is used to format and send data, whereas SSL works at a slightly lower level (between the application layer and the transport layer), securing the connection over which data will be sent.

HTTPS file transfer is the primary file transfer protocol used to secure online transactions. By default this protocol uses port 443, as opposed to the standard HTTP port of 80. URLs beginning with HTTPS indicate that the connection between the browser and the server is encrypted using SSL/TLS.
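
The layering described above can be made visible with a few lines of Python: a plain TCP connection (transport layer) is wrapped in TLS, and HTTP is then spoken through the encrypted channel (application layer). The hostname is a placeholder:

    import socket
    import ssl

    # Wrap a plain TCP connection in TLS, then speak HTTP through it -
    # this layering is exactly what HTTPS is.
    ctx = ssl.create_default_context()                 # verifies the certificate
    with socket.create_connection(("www.example.com", 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname="www.example.com") as tls:
            print(tls.version())                       # e.g. 'TLSv1.3'
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n"
                        b"Connection: close\r\n\r\n")
            print(tls.recv(200).decode(errors="replace"))  # start of the response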

For more information regarding the HTTPS file transfer solutions that we provide, contact Pro2col on 0333 123 1240.

I

Internal Controls

In file transfer, the term “internal controls” refers to both technology and manual (human-performed) procedures used to mitigate risk.  Examples of typical internal control technology include firewalls, secure file transfer software and standalone encryption packages.  Examples of manual internal control procedures include background checks, “multiple signer” document approval workflows and training to steer people away from risky behaviour (such as “certificate spills”).

 

Internet Protocol Suite

The Internet Protocol Suite is a term used to describe the set of communication protocols, developed individually by the IT community, for sending data over computer networks such as the Internet. TCP (Transmission Control Protocol) and IP (Internet Protocol) were the first two protocols included in the Internet Protocol Suite and are the basis from which the term originated. Sometimes referred to as TCP/IP, the Internet Protocol Suite as a whole consists of a number of internetworking protocols that operate in ‘network layers’, each designed to solve a specific issue affecting the transmission of data. Higher layers are closer to the user and deal with more abstract data, relying on lower layers to convert data into forms that can be physically manipulated for transmission. To elaborate, please refer to table (1a), which breaks down the layers included in the TCP/IP suite and explains each layer’s function and the protocols that can be used to fulfil these functions.

For more information regarding file transfer technologies which make use of the TCP/IP Stack, contact Pro2col on 0333 123 1240

 

 

IPv6

IPv6 is the name of the networking protocol that is rapidly replacing IPv4 in the wake of widespread IPv4 address exhaustion.  IPv6 is defined in 1998’s RFC 2460.

IPv6 addresses are written in “colon notation” like “fe80:1343:4143:5642:6356:3452:5343:01a4” rather than the “dot notation” used by IPv4 addresses such as “11.22.33.44”.  IPv6 DNS entries are handled through “AAAA” entries rather than the “A” entries used under IPv4.

BEST PRACTICES: All FTP technology should now support an RFC 2428 implementation of IPv6 and the EPSV (and EPRT) commands under both IPv4 and IPv6.  Until IPv4 is entirely retired, the use of technology that supports both IPv4 and IPv6 implementations of FTP is preferred. Avoid using FTP over connections that automatically switch from IPv6 to IPv4 or vice versa.
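
To see the difference in practice, the sketch below resolves a host’s AAAA (IPv6) records and opens a TCP connection to the first one; it requires working IPv6 connectivity, and the hostname and port are placeholders:

    import socket

    # Look up IPv6 (AAAA) addresses for a host and connect to the first one.
    # Placeholder hostname and port; requires IPv6 connectivity to run.
    for family, type_, proto, _, sockaddr in socket.getaddrinfo(
            "ftp.example.com", 21, socket.AF_INET6, socket.SOCK_STREAM):
        print("IPv6 address:", sockaddr[0])   # colon notation
        with socket.socket(family, type_, proto) as s:
            s.connect(sockaddr)
            print("connected; server banner:", s.recv(100))
        break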

 

 

ISO 27001

ISO 27001 is an information security management standard, published in October 2005 by the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC), that specifies the requirements for an Information Security Management System (ISMS).

Essentially an updated version of the old BS7799-2 standard, it provides a model for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within an organisation. Taking into consideration a specific organisation’s overall perceived risk, it details requirements for the implementation of security controls, suited to the needs of individual businesses.

Many organisations will have information security controls in place, but what many are lacking (and what ISO 27001 covers) is the need for a management approach to these controls.

ISO 27001 Standards
This standard is an optional certification that provides a structured approach when implementing an Information Security Management System. If an organisation takes the decision to adopt this standard, the specific requirements stipulated must be followed, as auditing and compliance checks will be made.

The standard requires that management within the organisation must:

  • Systematically assess the organisation’s information security risks, taking account of the threats, vulnerabilities and impacts.
  • Design and implement a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that it deems unacceptable.
  • Adopt an all-encompassing management process to ensure that the information security controls continue to meet the organisation’s information security needs on an ongoing basis.

 

 

L

LAN (Local Area Network)

LAN is the abbreviation used to describe a Local Area Network. The term “Local Area Network” refers to a computer network that covers a small physical area, usually confined to one building or a small group of buildings e.g. a home network or a business network.

A LAN is usually implemented to connect local workstations, servers and devices. This enables documents held on individual computers and servers to be accessed by any workstation within the LAN network. It also allows devices such as printers to be shared between workstations.

As a LAN connects a group of devices in close proximity, it can usually transmit data at a faster rate than a Wide Area Network. LANs are also relatively inexpensive to set up, as they use commodity hardware such as Ethernet cables, network adaptors and hubs.



Latency

Latency is the period of time taken to send a data packet from a source to its intended destination: the higher the latency, the slower the data transmission. It incorporates all elements of the file sending process, including encoding, transmission and decoding.

Certain delivery protocols, such as FTP, are particularly susceptible to latency. When sending packets of data to the remote site, the sending site waits for an acknowledgement that each packet has been received before sending the next one, which becomes extremely time consuming when latency is high. In extreme cases, the combined time spent delivering data and then listening for the reply can cause data throughput to drop so low that the solution is rendered useless.

There are several ways to combat this. One is to utilise multiple parallel TCP streams, working in the same manner as above except that many packet transfer requests are in flight at once, increasing the throughput. Another increasingly popular route is to adopt a UDP-based delivery protocol, which takes a send-and-forget approach, i.e. not waiting for the acknowledgement receipt. This can significantly speed up the delivery process, but additional features (such as retransmission and integrity checking) are then required, as UDP out of the box won’t work for everyone.

Network tools like ping and traceroute measure latency by determining the time it takes a given network packet to travel from source to destination and back, the so-called round-trip time. Round-trip time is not the only way to specify latency, but it is the most common. To test the latency of your Internet connection against hundreds of test servers, go to Speedtest.net, where you can compare your bandwidth and latency against a local (e.g. London) server and one further afield, say in Bangkok. On DSL or cable Internet connections, latencies of less than 100 milliseconds (ms) are typical and less than 25 ms desired. Satellite Internet connections, on the other hand, average 500 ms or higher latency.
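
Round-trip time can also be approximated without special tools by timing the TCP connection handshake; a rough sketch (the host is a placeholder, and a TCP connect is used as a stand-in because ICMP ping requires raw-socket privileges):

    import socket
    import time

    # Approximate round-trip latency by timing the TCP three-way handshake.
    host, port = "www.example.com", 443       # placeholder destination
    samples = []
    for _ in range(5):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass                              # connection established
        samples.append((time.perf_counter() - start) * 1000)
    print("round-trip estimates (ms):", [round(s, 1) for s in samples])
    print("best estimate (ms):", round(min(samples), 1))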

If you suffer from latency problems when it comes to file transfer, please contact Pro2col on 0333 123 1240 for more information on how you can combat this problem.



LDAP

LDAP is a type of external authentication that can provide rich details about authenticated users, including email address, group membership and client certificates.

LDAP connections use TCP port 389 but can (and should) be secured with SSL.  When LDAP is secured in this manner, it typically uses TCP port 636 and is often referred to as “LDAPS”.

BEST PRACTICE: Use the SSL secured version of LDAP whenever possible; the information these data streams contain should be treated like passwords in transit.   Store as much information about the user in LDAP as your file transfer technology will permit; this will improve your ability to retain centralised control of that data and allow you to easily switch to different file transfer technology if your needs change.
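
A minimal LDAPS bind-and-lookup sketch is shown below using the third-party ldap3 package (pip install ldap3); the server, DNs, password and attribute names are placeholders and will differ per directory:

    # Minimal LDAPS sketch using the third-party "ldap3" package.
    # Server, DNs, password and attributes are placeholders.
    from ldap3 import ALL, Connection, Server

    server = Server("ldaps://ldap.example.com", port=636, use_ssl=True,
                    get_info=ALL)
    conn = Connection(server, "cn=svc-mft,ou=services,dc=example,dc=com",
                      "secret-password", auto_bind=True)

    # Look up a user's email address and group membership over the secured link.
    conn.search("dc=example,dc=com", "(sAMAccountName=jbloggs)",
                attributes=["mail", "memberOf"])
    for entry in conn.entries:
        print(entry.mail, entry.memberOf)
    conn.unbind()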


LDAPS

LDAPS refers to LDAP connections secured with SSL, typically over TCP port 636. See “LDAP” for more information.
 
 

Leased Line

A leased line is a dedicated communications line set up between two end points by a telecommunications provider. Rather than a distinct physical cable, a leased line is in reality a reserved circuit; it has no telephone number, and each side of the circuit is permanently connected to the other.

Leased lines can be used for telephone, data and Internet services, and provide much higher bandwidth than the standard lines offered by Internet Service Providers.

The main advantage associated with leased lines is the increased bandwidth they provide. As they are sole, dedicated lines, the congestion that occurs on shared lines is eliminated, so latency is greatly reduced and communication is faster. Leased lines are usually purchased by large organisations that use the Internet heavily and wish to obtain a faster, more reliable connection.

If you are experiencing slow file transfer speeds, then contact Pro2col on 0333 123 1240 to find out more about how this issue can be resolved.

M

Managed File Transfer

Managed File Transfer is an industry term used to describe a hardware or software solution that facilitates the movement of large files both inside and outside of the business, whilst maintaining the security and integrity of sensitive data. Although many managed file transfer solutions are built using the FTP file transfer protocol, the phrase was coined to describe those solutions that have progressed and developed to address the disadvantages associated with basic FTP for large file transfer.
 

A solution classed as providing ‘managed file transfer’ should possess value-added features such as reporting (e.g. notification of successful file transfers), increased security and greater control of the file transfer process. These solutions enable organisations to automate, self-manage and secure the transfer of files. As companies expand, the need to transfer files between locations increases, making these features invaluable to enterprises responsible for sensitive customer data.

The development of managed file transfer solutions has had an enormous positive impact on business processes. In a number of market sectors, file transfer accounts for a significant percentage of man-hours spent sending and monitoring transmissions. Managed file transfer eliminates the need for these manual processes, as the solutions are designed specifically to do the job for you.

To find out more about managed file transfer and how it can help increase efficiencies within your organisation, please contact Pro2col on 0333 123 1240.

 

Map

In file transfer, a “map” is usually short for “transformation map”, which provides a standardised way to transform one document format into another through the use of pre-defined document definitions. See “transformation map” for more information.
 


Mapper

In file transfer, a “mapper” is a common name for a “transformation engine” that converts documents from one document definition to another through “transformation maps”. See “transformation engine” for more information.
 
 

MD4

MD4 (“Message Digest [algorithm] #4”) is best known as the data integrity check standard (a.k.a. “hash”) that inspired modern hashes such as MD5, SHA-1 and SHA-2.  MD4 codes are 128-bit numbers and are usually represented in hexadecimal format (e.g., “9508bd6aab48eedec9845415bedfd3ce”).

Use of MD4 in modern file transfer applications is quite rare, but MD4 can be found in rsync applications.  A variant of MD4 is also used to tag files in eDonkey/eMule P2P applications.

Although MD4 is considered a “cryptographic quality” integrity check (as specified in RFC 1320), it is not considered a secure hash today because it is possible for an attacker to create bad data that bears the same MD4 code as a set of good data.  For this reason, NIST does not allow the use of MD4 in key U.S. Federal Government applications.

BEST PRACTICE: Modern file transfer deployments should use FIPS-validated SHA-1 or SHA-2 implementations for integrity checks instead of MD4.



MD5

MD5 (“Message Digest [algorithm] #5”) is the most common data integrity check standard (a.k.a. “hash”) used throughout the world today.  MD5 codes are 128-bit numbers and are usually represented in hexadecimal format (e.g., “9508bd6aab48eedec9845415bedfd3ce”).

MD5 was created in 1991 as a replacement for MD4, and its popularity exploded alongside the growth of the Internet.  MD5’s use carried over into file transfer software and remains common today (e.g., FTP’s unofficial “XMD5” command).

Although MD5 is considered a “cryptographic quality” integrity check (as specified in RFC 1321), it is not considered a secure hash today because it is possible for an attacker to create bad data that bears the same MD5 code as a set of good data.  For this reason, NIST has now banned the use of MD5 in key U.S. Federal Government applications.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks instead of MD5.  However, FTP software that supports the XMD5 command can be useful to provide backwards compatibility during migration to stronger hashes.
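
The sketch below computes a legacy MD5 code alongside the recommended SHA-256 value using Python’s built-in hashlib, reading the file in chunks so large transfers need not fit in memory; the file name is a placeholder:

    import hashlib

    # Compute both the legacy MD5 code and a modern SHA-256 code for a file.
    def file_digests(path):
        md5, sha256 = hashlib.md5(), hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                md5.update(chunk)
                sha256.update(chunk)
        return md5.hexdigest(), sha256.hexdigest()

    md5_code, sha256_code = file_digests("invoice.pdf")   # placeholder file
    print("MD5    :", md5_code)      # compare with a server's XMD5 reply
    print("SHA-256:", sha256_code)   # preferred integrity check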



MDN

An MDN (“Message Disposition Notification”) is the method used by the AS1, AS2, and AS3 protocols (the “AS protocols”) to return a strongly authenticated and signed success or failure message back to the senders of the original file.  Technically, MDNs are an optional piece of any AS protocol, but MDNs’ critical role as the provider of the “guaranteed delivery” capability in all of the AS protocols means that MDNs are usually used.

Depending on the protocol used and options selected, the MDN will be returned to the sender in one of the following ways:

Via the same HTTP/S stream used to post the original file: AS2 senders may request that MDNs are sent this way.  This type of transfer is popularly called “AS2 with synchronous MDNs” (or “AS2 sync” for short).  When small files are involved, this type of transfer is the fastest AS protocol transfer currently available.

Via a separate HTTP/S stream back to the sender’s server: AS2 senders may request that MDNs are sent this way.  This type of transfer is popularly called “AS2 with asynchronous MDNs” (or “AS2 async” for short).  This type of transmission is slightly more resilient to network hiccups and the long processing turnaround times of large files than “AS2 sync” transmissions.

Via email: All AS1 MDNs are returned this way.  AS2 async senders may also request that MDNs are sent this way.

Via FTP: All AS3 MDNs are returned this way.

Full MDNs (the signed responses) are sometimes retained by the sender and/or recipient as irrefutable proof of guaranteed delivery.  The use of X.509 certificates to authenticate and sign both the original file transmission and the MDN receipt often allows MDNs to rise to the level of legally binding nonrepudiation in many jurisdictions.


Message-Oriented Middleware

Message-Oriented Middleware (“MOM”) is software that delivers robust messaging capabilities across heterogeneous operating systems and application environments. Up through the early 2000s, MOM was the backbone of most EAI (“Enterprise Application Integration”) inter-application connectivity. Today, that role largely belongs to ESB (“Enterprise Service Bus”) infrastructure instead.

 

Metadata

In file transfer, “metadata” usually refers to information about files moved through a file transfer system.  Examples of metadata include usernames of original submitter, content types, paths taken through the system so far and affirmations of antivirus or DLP checks.

Metadata such as suggested next steps is often submitted to file transfer applications in control files. However, most metadata is typically collected during a file’s flow through a file transfer system.  (All the metadata examples above are examples of passively collected metadata.)

File transfer applications often use metadata in their configured workflows to make runtime decisions.  (e.g., A workflow engine may be configured to send files from two different users to two different destinations.)

Metadata is often stored in the status, workflow and log databases used by file transfer applications.  When these data stores are proprietary or inaccessible, integrating metadata from multiple applications can be challenging.

Explicit file attributes such as file size, file name, current location on disk and current permissions are not typically considered metadata.  These attributes are not considered metadata because they are required by almost every operating system; by definition, metadata is extra data used to provide additional context for each file.


Microsoft Cluster Server

Microsoft Cluster Server (“MSCS”) is a Microsoft-specific high availability technology that provides a failover capability to pairs of its servers.

Like “web farm”, the term “clustering” is a vendor-neutral term, but every vendor that does clustering does it a little differently, and provides cluster services at different levels (typically at the hardware, OS or application levels).

Microsoft clusters use a specific combination of hardware (e.g., a quorum disk with fibre attachment) and operating system (e.g., Windows Server 2008 Enterprise), and Microsoft has named those particular bundles – the only bundles that support clustering – “Microsoft Cluster Server”.

See also “Web Farm“.

BEST PRACTICES: Explicit support for MSCS is no longer critical for file transfer technology.  Most managed file transfer applications already have application-level clustering support, use web farms or can be failed over in virtual environments using technology like VMware vMotion.


Middleware

Middleware is a software architecture concept that refers to the integration of disparate applications to facilitate reliable communication.  Middleware frequently relies on encapsulating inter-application communications in the concept of a “message”, and often has the ability to queue messages or perform optimised delivery or copying of messages to various applications.

Common types of middleware include EAI (“Enterprise Application Integration”) middleware such as ESB (“Enterprise Service Bus”) or the older MOM (“Message-Oriented Middleware”).

File transfer applications are themselves often used as middleware, helping to facilitate bulk data transfers between applications using standards such as FTP.  Managed file transfer solutions often include the ability to perform some intelligent routing of data and to respect particular transmission windows set by the business.


MOM

In the context of file transfer, MOM stands for “Message-Oriented Middleware”, which is software that delivers robust messaging capabilities across heterogeneous operating systems and application environments.
 
 

MSCS

MSCS is an abbreviation for “Microsoft Cluster Server“, which is a Microsoft-specific high availability technology that provides a failover capability to pairs of its servers.

N

NCUA

The NCUA (“National Credit Union Administration”) is like the FDIC for credit unions.  It provides insurance to credit unions and expects a solid level of operations in return, issuing regulations and auditing member credit unions for fitness.

The NCUA’s official web site is www.ncua.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “FDIC” (federally chartered banks), “the Fed” (U.S. central bank), “OCC” (national and foreign banks) and “OTS” (savings and loans).


Network Layer

The concept of a network layer, or ‘layered network’, was developed to account for the rapid changes that occur in technology. It allows newly developed protocols to work alongside one another to achieve a specified task, for example a secure file transfer.

The higher layers of a network are closer to the user and deal with more abstract data, relying on lower layers to convert that data into forms that can be physically manipulated for transmission.

Each network layer is designed to perform a specific task, passing information up or down to the next layer as data is processed.

 

NIST

NIST (“National Institute of Standards and Technology”) is a United States based standards body whose influence on the file transfer industry is felt most heavily through its FIPS 140-2 encryption and hashing standard.  It is also the keeper of many other security standards which must be met if file transfer technology is used in or to connect with the federal government.



Non-Repudiation

Non-repudiation (also “nonrepudiation”) is the ability to prove beyond a shadow of doubt that a specific file, message or transaction was sent at a particular time by a particular party and received by another party.  This proof prevents anyone from “repudiating” the activity: later claiming that the file, message or transaction was not sent, that it was sent at a different time, sent by a different party or received by a different party.  (“Repudiate” essentially means “reject”.)

Non-repudiation is important for legal situations where fraud through fake transactions could occur, such as a string of bad ATM transactions.  However, it is also an important assumption behind most day-to-day processing: once a request occurs and is processed by an internal system, it’s often difficult and expensive to reverse.

The technology behind non-repudiation is often built on:
– Strong authentication, such as that performed with X.509 certificates, cryptographic keys or tokens.
– Cryptographic-quality hashes, such as SHA256, that ensure each file’s contents bear their own unique fingerprint.  (The fingerprints are stored, even if the data isn’t.)
– Tamper-evident logs that retain date, access and other information about each file sent through the system.

Some file transfer protocols, notably the AS1, AS2 and AS3 protocols (when MDNs are in use), have non-repudiation capabilities built into the protocols themselves.  Other protocols depend on proprietary protocol extensions (common in FTP/S and HTTP/S) or higher-level workflows (e.g., an exchange of PGP-encrypted metadata) to accomplish non-repudiation.
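
The hash-then-sign building block behind non-repudiation can be sketched with the third-party cryptography package (pip install cryptography). Real deployments use X.509 certificates issued by a certificate authority rather than a key generated on the fly, so this is an illustration of the mechanism only; the file name is a placeholder:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Illustrative only: real systems sign with the private key behind an
    # X.509 certificate, not a freshly generated key pair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    data = open("payment-batch.dat", "rb").read()        # placeholder file
    signature = private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())

    # Anyone holding the public key can verify the signature later; the
    # sender cannot plausibly deny ("repudiate") having produced the file.
    private_key.public_key().verify(signature, data,
                                    padding.PKCS1v15(), hashes.SHA256())
    print("signature verified - the sender cannot repudiate this file")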

O

OCC

The OCC (“Office of the Comptroller of the Currency”) is an independent bureau of the United States Treasury Department.  It charters, regulates and supervises all national banks. It also supervises the federal branches and agencies of foreign banks.  In its regulatory role, it is similar to the FDIC.

The OCC’s official web site is www.occ.treas.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “FDIC” (federally chartered banks), “the Fed” (U.S. central bank), “NCUA” (credit unions) and “OTS” (savings and loans).


OLA

OLA is an abbreviation for “Operational Level Agreement”, a type of internal agreement between departments that makes it possible for file transfer operations to achieve their SLAs (Service Level Agreements). See “Operational Level Agreement” for more information.



Onboard

To onboard a user or onboard a partner is to set up all the necessary user accounts, permissions, workflow definitions and other elements necessary to engage in electronic transfers of information with those users and partners.

Automatic onboarding of users or partners usually involves external authentication technology of some kind.   When that technology involves particularly rich user or partner profiles and allows users and partners to maintain their own information, then the external authentication technology used to onboard users and partners is often called “Community Management” technology.

“On board” and “on-board” are also occasionally used instead of “onboard”, and administrators often use the phrases “onboard a user” and “provision a user” interchangeably.   See “provisioning” for more information.


Operational Level Agreement

An operational level agreement (OLA) is a less stringent form of service level agreement (SLA) typically set up between two departments in the same organisation, especially when an OLA is set up to help support a customer-facing SLA. See “Service Level Agreement” for more information.
 
 

Orchestration

Orchestration is the ability to control operational flows and activities based on business rules, especially in multi-application systems complicated enough to require middleware such as ESB (“Enterprise Service Bus”) or the older MOM (“Message-Oriented Middleware”).

In the context of a file transfer system, orchestration often refers to the ability to apply automation such as triggers, schedules, explicit calls and chained calls to model or solve a business problem.

In the context of SOA (“Service Oriented Architecture”), orchestration typically refers to the ability of programmers to rapidly develop composite applications due to the fact that most available application APIs have been encapsulated and published in reliable directories that programmers’ applications can easily interpret and use.

BEST PRACTICE: Orchestration typically evokes an image of “drag and drop” business application development: a task easy enough for the average business analyst.  Reality often requires more than that: shelling out to scripts, editing raw XML documents by hand and having to clean up “orchestrated code” after an incompatible interface is rolled out are still common issues.


OTS

The OTS (“Office of Thrift Supervision”) is a United States Treasury Department office that oversees “savings and loans”, particularly those involved in real estate mortgages.  The OTS examines each member institution every 12-to-18 months to assess the institution’s safety and soundness.   In that role, it behaves much like the FDIC does with federally chartered banks.

The OTS’s official web site is www.ots.treas.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “FDIC” (federally chartered banks), “the Fed” (U.S. central bank), “NCUA” (credit unions) and “OCC” (national and foreign banks).

P

Package

The term “package” can mean different things in different file transfer situations.

“Installation package” – A file that contains all the executables, installation scripts and other data needed to install a particular application.  This file is usually a compressed file and is often a self-extracting compressed file.

“Package sent to another person” – Very similar in scope to email’s “message with attachments”.  This is a term that has rapidly gained acceptance (since about 2008) to describe what gets sent in “person-to-person” transmission.  A package may contain zero or more files and a plain or richly formatted text message as its payload.  A package will also always contain system-specific metadata such as sender/receiver identity, access control attributes, time-to-live information and quality of service information.

“Installation package” is the earlier context of the term “package”; if you’re dealing with server teams or transmission specialists who deal primarily with system-to-system transfers then “installation package” is probably what they mean when they say “package”.

“Package sent to another person” has evolved as file transfer vendors gradually align the terminology of person-to-person file transfers with physical parcel transfers like those done by UPS or FedEx.   In physical parcel transfers, individual packages may contain a variety of things but each is specifically addressed, guaranteed to be delivered safely and intact and each has its own level of service (e.g., 2nd day vs. overnight).   The term “packages” is similarly used with many person-to-person file transfer solutions to help non-technical people understand the concept in a different context.


Packets

In the world of IT, a packet is a unit of data carried over a network. When sending data over a network, messages or files are broken down into manageable packets before transmission. Depending on the protocol used to break the data down, these units may also be referred to as datagrams, segments, blocks, cells or frames. Once they have been transmitted, the packets are re-assembled at the receiving end to recreate the original data file.

The structure of a packet can vary depending on which protocol is used to format it. Typically a packet consists of a header and a payload. The header carries information regarding the reassembly of the packets e.g. where they came from, where they are going to and in what order. The payload refers to the data that it carries.
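
To make the header/payload split concrete, here is a toy sketch using Python’s struct module to build and parse a packet-like record; the field layout is invented for illustration and does not correspond to any real protocol:

    import struct

    # Toy packet format (invented): a fixed header carrying a sequence number
    # and payload length, followed by the payload bytes themselves.
    HEADER = struct.Struct("!IH")    # network byte order: uint32 seq, uint16 len

    def build_packet(seq, payload):
        return HEADER.pack(seq, len(payload)) + payload

    def parse_packet(packet):
        seq, length = HEADER.unpack_from(packet)
        return seq, packet[HEADER.size:HEADER.size + length]

    pkt = build_packet(7, b"chunk of a larger file")
    seq, payload = parse_packet(pkt)
    print(seq, payload)              # 7 b'chunk of a larger file'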

 

Payment Card Industry Data Security Standard (PCI DSS)

The PCI Security Standards Council is an open global forum, formed in 2006. The five founding global payment brands are:

American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc.

A global security standard, PCI DSS comprises 12 comprehensive requirements designed to enhance the security of cardholder data. The most pertinent of these requirements in terms of large file transfer are:

  • Requirement 3: Protect stored cardholder data.
  • Requirement 4: Encrypt transmission of cardholder data across open, public networks.
  • Requirement 6: Develop and maintain secure systems and applications.
  • Requirement 9: Restrict physical access to cardholder data.
  • Requirement 10: Track and monitor all access to network resources and cardholder data.

Companies that do not comply with PCI DSS are liable to incur operational and financial consequences enforced by the individual payment brands.

If you’d like to find out more about the secure file transfer solutions in our portfolio that will help you to achieve PCI compliance, please contact Pro2col on 0333 123 1240.


PCI

PCI stands for “Payment Card Industry”.  In file transfer, “PCI compliance” frequently refers to a deployed system’s ability to adhere to the standard outlined in “PCI DSS” – a security regulation for the credit card industry.  “PCI certification” is achieved when a PCI compliant system is audited by a PCI Council-approved firm and that third-party firm agrees that it is in compliance.
 


PCI Council

The “PCI Council” is a short name for “PCI Security Standards Council”, the vendor-independent consortium behind PCI (“Payment Card Industry”) standards.
 
 

PCI DSS

PCI DSS (“Payment Card Industry Data Security Standard”) is an information security standard introduced by the PCI Security Standards Council, comprising 12 comprehensive requirements designed to enhance the security of cardholder data. See “Payment Card Industry Data Security Standard (PCI DSS)” for more information.

 

PCI Security Standards Council

The PCI Security Standards Council is the vendor-independent consortium behind the PCI (“Payment Card Industry”) standards.

 

Personal Data

Personal data means any data that makes a living person identifiable. This could be ‘direct’, such as their name, or ‘indirect’, where combined pieces of information could identify the person; email addresses and IP addresses can fall into this category. GDPR also refers to special categories of sensitive data, which include information about racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, details of health, sex life or sexual orientation, and genetic or biometric data used for the purpose of identifying someone.
 
 

PeSIT protocol

PeSIT is an open file transfer protocol often associated with Axway. It was originally developed, before the availability of the Internet as we know it today, to connect mainframe computers via X.25, ISDN modems or TCP/IP-based WANs.

Like Sterling Commerce’s proprietary NDM file transfer protocol, PeSIT has now been written into the standard communication specifications of several industry consortiums and government exchanges, thus ensuring a high degree of long-term dependence on Axway technology. PeSIT is required far more often in Europe than in the United States, due to Axway’s French roots and home-turf advantage.

Also like NDM, PeSIT is normally used in internal (or trusted WAN) transfers; other protocols are typically used for transfers across the Internet.

PeSIT stands for “Protocole d’Echanges pour un Systeme Interbancaire de Telecompensation” and was designed as a specialised replacement for FTP to support European interbank transactions in the mid-1980s.  One of the primary features of the protocol was “checkpoint restart”.

BEST PRACTICE: PeSIT, NDM and other protocols should be avoided in new deployments unless specifically mandated by industry rule or statute.   The interesting capabilities these protocols offer (e.g., checkpoint restart, integrity checks, etc.) are now also available in vendor-neutral protocols, and from vendors who allow wider and less expensive licensing of connectivity technology than Axway and Sterling Commerce (now IBM).

FACTS: The PeSIT standard was initially written in 1989 and has not been updated since. At the time it was an advanced protocol, but in the thirty-plus years since, other more modern protocols have replicated its distinctive features whilst also keeping pace with modern encryption standards such as TLS 1.2/1.3. Further information, including the 1989 PeSIT protocol specification, can be found at https://pesit.org/.

PRODUCTS: Due to the complex nature of PeSIT configuration and the limited number of available experts in the protocol, it is not widely supported. Axway continues to support PeSIT, whereas other vendors provide PeSIT support in order to be interoperable with Axway’s software. Products supporting the PeSIT protocol include:

PGP

PGP (“Pretty Good Privacy”) is an encryption program that provides cryptographic privacy and authentication for data communication. PGP is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and whole disk partitions and to increase the security of e-mail communications.
 
 


Provisioning

Provisioning is the act of adding access to and allocating resources to end users and their file transfer workflows.  It is often used interchangeably with the term “onboarding“.

The act of provisioning should always be audited, and the audit information should include the identity of the person who authorized the act and any technical actions the system took to provision the user.

Most file transfer servers today allow administrators to chain up to Active Directory (AD), LDAP or RADIUS or other external authentication to allow centralized management (and thus provisioning) of authentication and access.  However, provisioning of customer-specific workflows is often a manual procedure unless standard workflows are associated with provisioning groups.

Automated provisioning of users through import capabilities, APIs and/or web services is a competitive differentiator across different file transfer servers, and varies widely from “just establish credentials”, through “also configure access” and on to “also configure workflows”.

Use of external authentication usually makes migration from one file transfer technology to another much easier than when proprietary credential databases are in use.  When external authentication is in use, end users usually do not need to reset their current passwords.  However, when proprietary credential databases from two different vendors (or sometimes two different products from the same vendor) are involved, it is common for every end user to have to change his or her password during migration.

BEST PRACTICE: Whenever possible, implementers of file transfer technology should use an external authentication source to control access and privileges of end users.  When an external authentication source is used to control authentication in this manner, provisioning on the file transfer server can occur at any moment after the user is created or enabled on the central authentication server.

Q

QOS

QOS stands for “Quality Of Service”. See “Quality of Service” for more information.


Quality of Service

Quality of Service (or “QOS”) is the ability to describe a particular level of service and then intelligently allocate resources to reliably provide that level of service.  A common example of general QOS capabilities is found in the “traffic shaping” features of routers: different types of traffic (e.g., web surfing, videoconferencing, voice, etc.) share a common network but allocations are intelligently made to ensure the proper prioritisation of traffic critical to the business.

In a business context, QOS is closely associated with a partner’s ability to meet its Service Level Agreements (SLAs) or an internal department’s ability to meet its Operational Level Agreements (OLAs).

In a technical context, file transfer QOS typically involves one or more of the following factors:

  • Traffic shaping – ensuring that FTP, SSH and other file transfer traffic continues to operate in flooded network conditions.  (Many types of file transfer traffic, including FTP and SSH traffic, are often easy to spot.)
  • Network timeouts and negotiation responses – ensuring that long-running sessions are explicitly allowed or denied, or throttling TCP negotiations (either speeding up to ensure the initial attempt survives, or scaling back to limit the effects of rapid-fire script-kiddie attacks).
  • Any of the major components of a file transfer SLA – e.g., availability of file transfer services, round-trip response time for file submissions or completion of particular sets of work.
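
Traffic shaping of the kind described in the first bullet above is often implemented with a token bucket, which lets traffic burst briefly but holds the long-run rate to a configured limit. A simplified, illustrative sketch (the rates shown are arbitrary examples):

    import time

    # Simplified token-bucket shaper: a chunk may be sent once enough
    # "tokens" (bytes of allowance) have accumulated. Illustrative only.
    class TokenBucket:
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.stamp = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.stamp) * self.rate)
            self.stamp = now

        def consume(self, nbytes):
            self._refill()
            while self.tokens < nbytes:          # wait for allowance to accrue
                time.sleep((nbytes - self.tokens) / self.rate)
                self._refill()
            self.tokens -= nbytes

    bucket = TokenBucket(rate_bytes_per_sec=128 * 1024, burst_bytes=64 * 1024)
    for chunk in [b"x" * 32768] * 8:             # pretend file chunks
        bucket.consume(len(chunk))               # shape to ~128 KB/s
        # ... the chunk would be written to the network here ...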

R

RADIUS

RADIUS is an authentication protocol that supports the use of username, password and sometimes one extra credential number such as a hardware token PIN.

In file transfer applications, RADIUS sign on information can be collected by web-based, FTP-based or other file transfer prompts and then tried against trusted RADIUS servers.  When a file transfer application gets a positive acknowledgement from a RADIUS server, it will typically need to look up additional information about the authenticated user from its internal user database or other external authentication sources (frequently LDAP servers such as Active Directory).

 

RFI

RFI stands for “Request for Information” and is used to ask which products and services are available to meet your file transfer needs and to get information about the firms behind the offerings.   The utility of RFIs in the acquisition of technology declined significantly with the rise of the world wide web, as much of the information typically requested in an RFI is freely available on vendor web sites.

While an RFI is not technically an invitation to bid, many companies nonetheless use them as such.    The correct instrument to use to solicit bids is the RFP (“Request for Proposal”).    The correct role of a file transfer RFI process is to determine which, if any, file transfer vendors could potentially serve the needs of a file transfer project or strategic file transfer consolidation.

BEST PRACTICE: Send file transfer RFPs to potential vendors instead of file transfer RFIs unless corporate/government policy forces you to send RFIs.   Use of the RFP format will allow you to get a better answer back faster from potential vendors by forcing you to describe your specific challenges upfront and signalling to vendors that you probably have the focus and funding to proceed.


RFP

RFP stands for “Request For Proposal” and allows multiple vendors to suggest a specific solution to your specific challenges in a well-documented and repeatable format.

Good responses to a file transfer RFP will answer your questions about:

  • Vendors’ ability to execute (e.g., experience, expertise)
  • Vendors’ position in the industry (e.g., innovator, total solution, value-priced)
  • Products’ ability to meet needs off the shelf, degree of product customisation and/or use of professional services or third-party technology to complete solution
  • Vendor’s ongoing commitment to products used to provide the solution
  • Time to deploy solution, with specific milestones if the project is longer than three months
  • Cost to deploy solution, including internal costs
  • Risk of deploying solution (e.g., backout considerations, integration milestones, etc.)

BEST PRACTICE: Use of the RFP format will allow you to get a good answer back fast from potential file transfer vendors by forcing you to describe your specific challenges up front and signalling to vendors that you probably have the focus and funding to proceed.


Right to Erasure

Under GDPR, the data subject has the right to request erasure of personal data.

S

Sarbanes Oxley (SOX)

The Sarbanes Oxley Act is a US federal law, enacted on 30th July 2002, governing financial reporting and accountability processes within public companies. The legislation was brought into force as a safeguard following a succession of corporate accounting scandals involving a number of high-profile organisations. These companies purposefully manipulated financial statements, costing investors billions of dollars.

Sarbanes Oxley (SOX) contains 11 titles, detailing specific actions and requirements that must be adopted for financial reporting, ranging from corporate board responsibilities to criminal penalties incurred as a consequence of non-compliance. The most significant of these titles in terms of data transfer is section 404.

Sarbanes Oxley Standards
Section 404 states that companies governed by SOX are required to:

  • Publish information in their annual reports, stating the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting, detailing the scope and adequacy.
  • Include an assessment of the effectiveness of internal controls.

What are the implications of SOX in terms of file transfer?
In order to provide this information and ensure compliance with US law, publicly traded companies must implement large file transfer processes that:

  • Accurately record all financial data and maintain audit logs.
  • Prevent access to and modification of financial data by unauthorised users.
  • Track the activity of data as it crosses application and organisational boundaries.

Our specialists at Pro2col can help you to source and implement a SOX compliant secure file transfer solution to suit your business requirements. Please contact us on 0333 123 1240 for more information.


Secure File Transfer

Security is of paramount importance in today’s corporate environments, due to the sensitive nature of the information that they hold. Laws and industry standards such as PCI DSS, Sarbanes Oxley and HIPAA dictate an organisation’s responsibility to secure such information, and as a result the need for secure file transfer solutions has become a priority.

A number of secure file transfer protocols have been developed over the years as a solution to the issue of data security. As there are several different ways of sending and receiving files – e.g. HTTP (via a web browser), FTP (client to server) and email – there are a variety of security protocols used to secure communication channels. The key secure file transfer protocols include:

  • FTPS (FTP run over SSL/TLS)
  • HTTPS
  • SFTP (SSH – Secure Shell protocol)

The FTPS and HTTPS file transfer protocols send files over a TCP connection protected by a TLS (Transport Layer Security) or SSL (Secure Sockets Layer) layer that runs beneath the FTP and HTTP protocols. Simply put, SSL is a protocol that establishes an agreement between a client/browser and a server: the server presents a certificate containing its public key, the two sides use public-key cryptography to negotiate a shared symmetric session key, and that session key is then used to encrypt the files sent using the FTP or HTTP file transfer protocol. Only the two endpoints of the agreement hold the session key, so only they can decrypt the data.
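For example, an explicit FTPS session can be established with Python’s standard ftplib module; the host name, credentials and file name below are placeholders.

    # Minimal FTPS sketch using Python's standard library; host, credentials
    # and file names are placeholders.
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com")    # connects on the standard FTP port (21)
    ftps.login("username", "password")   # control channel is upgraded to TLS first
    ftps.prot_p()                        # switch the data channel to TLS as well
    ftps.retrlines("LIST")               # list the remote directory securely

    with open("report.pdf", "rb") as f:  # upload a file over the encrypted channel
        ftps.storbinary("STOR report.pdf", f)
    ftps.quit()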

SFTP is not FTP run over SSH, but rather a standalone secure file transfer protocol designed from the ground up. The file transfer protocol itself does not provide authentication and security; it expects the underlying protocol (SSH) to provide them. SSH is used to establish a connection between a client and server that acts like an encrypted tunnel. This encrypted tunnel protects any data files that are sent via this secure connection.

If you want to find out more information about secure file transfer solutions, please contact Pro2col on 0333 123 1240.

 

Self-Provisioning

Self-provisioning is the ability for individual end users and partners to set up (or “provision“) their own accounts.

Self-provisioning is a common element of most cloud services but remains relatively rare in file transfer applications.  A major difference between those environments is that self-provisioning in cloud services usually involves linking a credit card or other form of payment to each provisioned account.  This gives cloud services two important things that encourage the use of self-provisioning: a third-party validation of a user’s identity and an open account to bill if things go astray.  File transfer environments, by contrast, involve a lot of trusted links and often require, by law or regulation, human intervention before such a link is authorised.

BEST PRACTICE: Self-provisioning may or may not be right for your environment.  As is the case with many types of automation, evaluation of this technology in a file transfer environment should involve a cost-benefit analysis of manually provisioning and maintaining groups of users vs. building a self-provisioning application that meets your organisation’s standards for establishing identity and access.  A common alternative that lies between manual provisioning and self-provisioning is the ability to delegate permission to provision a partner’s users to that partner’s administrator.  (File transfer Community Management often involves delegating provisioning privileges this way.)


SEPA

The Single Euro Payments Area (SEPA) is an EU initiative to unify payments within the EU. It is primarily driven by the European Payments Council. (SEPA is not, by itself, a standard.)
 
 

Service Level Agreement

A file transfer service level agreement (SLA) establishes exactly what a particular customer should expect from a particular file transfer provider, and how that customer should seek relief for grievances.

A file transfer SLA will often contain the following kinds of service expectations:

Availability: This expresses how often the file transfer service is expected to be online.  An availability SLA is often expressed as a percentage with a window of downtime.  For example: “99.9% uptime except for scheduled downtime between 2:00am and 5:00am on the second Sunday of the month.”
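For reference, the downtime such a target permits is easy to work out: 99.9% availability allows 0.1% downtime, which over a 30-day month is 0.001 × 30 × 24 × 60 ≈ 43 minutes of unscheduled downtime.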

Different availability SLAs may be in effect for different services or different customers. Availability SLAs are not unique to file transfer; most Internet-based services contain an availability SLA of some kind.

Round-Trip Response Time: This expresses how fast a complete response to a submitted file will be returned.  A round-trip response time SLA is often expressed as a certain length of time.  For example, “we promise to return a complete response for all files submitted within 20 minutes of a completed upload”.  Sometimes a statistical percentage is also included, as in “on average, 90% of all files will receive a completed response within 5 minutes.”

The reference to “round-trip” response time rather than just “response time” indicates that the time counted against the SLA is the total time it takes for a customer to upload a file, for that file to be consumed and processed internally, and for any response files to be written and made available to customers.  Simple “response time” could just indicate the amount of time it would take the system to acknowledge (but not process) the original upload.

Different round-trip response time SLAs may be in place for different customers, types of files or times of day. Round-trip response time SLAs are similar to agreements found in physical logistics: “if you place an order by this time the shipment will arrive by that time.”

Completed Body of Work: This expresses that a particular set of expected files will arrive in a particular window and will be appropriately handled, perhaps yielding a second set of files, within the same or an extended window.  For example, “we expect 3 data files and 1 control file between 4pm and 8pm every day, and we expect 2 response files back at any time in that window but no later than 9pm.”

Files in a body of work can be specified by name, path, size, contents or other pieces of metadata.  There are typically two windows of time (“transmission windows“) associated with a body of work: the original submission window and a slightly larger window for responses.

SLAs can be set up between independent partners or between departments or divisions within an organisation.  A less stringent form of SLA is known as an operating level agreement (OLA) when it is set up between two departments in the same organisation, especially when the OLA exists to help support a customer-facing SLA.

BEST PRACTICE: A good file transfer SLA will contain expectations around availability and either round-trip response times or expected work to be performed in a certain transfer window, as well as specific penalties for failing to meet expectations.  File transfer vendors should provide adequate tools to monitor SLAs and allow people who use their solutions to detect SLA problems in advance and to compensate customers appropriately if SLAs are missed.


SFTP File Transfer

SFTP file transfer, or the ‘SSH File Transfer Protocol’ as it is more formally known, is a network communications protocol used for sending data securely over a network. A common misconception associated with SFTP is that it is FTP run over SSH – this is not the case. SFTP, sometimes referred to as ‘secure file transfer protocol’, is an independent protocol that was developed from scratch.

Built to look and feel like FTP because of FTP’s popularity, SFTP was developed as a secure alternative to it. Based on the Secure Shell protocol, SFTP encrypts both the data and the commands before transmission, using keys negotiated via SSH key-exchange algorithms. This provides protection against eavesdroppers and hackers, helping to ensure that any data sent using SFTP remains secure.

More than just a dedicated file transfer protocol, SFTP also enables permission and attribute manipulation, file locking and other functionality. That said, a drawback associated with the use of SFTP has been the limited number of client applications available that are actually compatible with SFTP servers.

The default port for SFTP is port 22.
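As a minimal sketch, an SFTP transfer can be scripted with the third-party paramiko Python library (pip install paramiko); the host, credentials and paths below are placeholders.

    # Minimal SFTP sketch using paramiko; host, credentials and paths are
    # placeholders, and host-key handling is simplified for the demo.
    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
    ssh.connect("sftp.example.com", port=22, username="username", password="password")

    sftp = ssh.open_sftp()                              # SFTP session inside the SSH tunnel
    sftp.put("invoice.csv", "/inbound/invoice.csv")     # upload
    sftp.get("/outbound/response.csv", "response.csv")  # download
    sftp.close()
    ssh.close()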

If you would like to know what Pro2col can do for you in terms of secure file transfer, please contact us on 0333 123 1240.


SHA-1

SHA-1 (“Secure Hash Algorithm #1”, also “SHA1”) is the second most common data integrity check standard (a.k.a. “hash”) used throughout the world today.  SHA-1 codes are 160-bit numbers and are usually represented in hexadecimal format (e.g., “de9f2c7f d25e1b3a fad3e85a 0bd17d9b 100db4b3”).

SHA-1 is the least secure hash algorithm NIST currently supports in its FIPS validated cryptography implementations.   However, SHA-1 is faster than SHA-2 (its successor) and is commonly available in file transfer technology today (e.g., FTP’s unofficial “XSHA1” command).
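For example, an integrity check can be computed with Python’s standard hashlib module; the file name below is a placeholder.

    # Sketch of a file integrity check; reading in chunks keeps memory use
    # constant even for very large files.
    import hashlib

    def file_digest(path, algorithm="sha1"):
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(file_digest("payload.zip", "sha1"))    # compare against e.g. an XSHA1 response
    print(file_digest("payload.zip", "sha256"))  # the SHA-2 equivalent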

BEST PRACTICE: Modern file transfer deployments should use FIPS-validated SHA-1 or SHA-2 implementations for integrity checks.  Some SHA-1 usage is already prohibited in high-security environments.

 

SHA-2

SHA-2 (“Secure Hash Algorithm #2”) is the most secure hash algorithm NIST currently supports in its FIPS validated cryptography implementations.  SHA-2 is really a collection of four hashes (SHA-224, SHA-256, SHA-384 and SHA-512), all of which are stronger than SHA-1.

Complete SHA-2 implementations in file transfer are still uncommon, but they are becoming more widespread as time passes.

A “SHA-3” contest to find a future replacement for SHA-2 is currently underway and expected to conclude in 2012.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks.



SHA-224

SHA-224 is the 224-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”). It is not a unique hash algorithm within the SHA-2 standard but is instead a truncated version of SHA-256. See “SHA-2” for more information.
 
 

SHA-256

SHA-256 is the 256-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”). Like SHA-512, it is one of the two unique algorithms in the SHA-2 family, but SHA-256 is optimized for 32-bit calculations rather than 64-bit calculations. See “SHA-2” for more information.

 

SHA-3

SHA-3 refers to the new hash algorithm NIST will choose to someday replace SHA-2. A contest to select the new hash is scheduled to conclude in 2012.
 


SHA-384

SHA-384 is the 384-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”). It is not a unique hash algorithm within the SHA-2 standard but is instead a truncated version of SHA-512. See “SHA-2” for more information.
 


SHA-512

SHA-512 is the 512-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”). Like SHA-256, it is one of the two unique algorithms in the SHA-2 family, but SHA-512 is optimized for 64-bit calculations rather than 32-bit calculations. See “SHA-2” for more information.
 


SLA

SLA is an abbreviation for “Service Level Agreement“, which is a specific contract between a customer and a provider that lays out exactly what each side can expect from the other.   The minimum amount of work and minimum level of due care that a file transfer operations team is responsible for is often determined by the SLAs it must meet.

See “Service Level Agreement” for more information.


SMTP

SMTP is an email protocol used to push messages and attachments from server to server.  Many technologies have been used to secure SMTP over the years, but the best technologies available today use SSL (version 3) or TLS to secure the entire SMTP connection.

SMTP typically uses TCP port 25 to move unsecured traffic and often uses TCP port 465 to move secured traffic.  Use of alternate ports with SMTP is extremely common to reduce connections from spammers who try to exploit servers listening on the most common ports.
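As an illustration, a file transfer notification might be sent over a TLS-secured SMTP connection using Python’s standard smtplib; the server, port (587 with STARTTLS is shown here) and addresses are placeholders.

    # Minimal sketch of a secured SMTP notification; server, port and
    # addresses are placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "File transfer complete: invoice.csv"
    msg["From"] = "mft@example.com"
    msg["To"] = "operator@example.com"
    msg.set_content("invoice.csv was delivered successfully.")

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()   # upgrade the connection to TLS before authenticating
        server.login("mft@example.com", "password")
        server.send_message(msg)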

BEST PRACTICES: SMTP should always be secured using SSL (version 3) or TLS.  If your file transfer deployment uses email notifications, then email sent through SMTP should either always be accepted within a few seconds or should be automatically queued up in the file transfer application in case of delay.  If SMTP services are unreliable and no queue mechanism exists in your file transfer solution, then a standalone mail server relay should be implemented to ensure that timed-out notifications do not cause unnecessary delay or failure of file transfer activities.


SSH File Transfer

SSH (Secure Shell) is a network protocol used to establish a secure connection between a client and server. Once a connection has been established, it acts like an encrypted tunnel down which data can be exchanged securely. SSH file transfer is used to maintain the confidentiality and integrity of data communications over insecure networks such as the Internet.

SSH file transfer also supports authentication using public-key cryptography as an alternative to passwords. This form of cryptography works on a Private Key, Public Key system – the sending side has one of each, as does the receiving side of the transfer. Messages are encrypted with the recipient’s public key and can only be decrypted with the corresponding private key.

Originally developed as a replacement for Telnet and other insecure remote shells, SSH is used principally on Linux and Unix based systems to access shell accounts.

The default port for SSH is port 22.

 

SSL

SSL (“Secure Sockets Layer”) was the first widely-deployed technology used to secure TCP sockets.  Its use in HTTPS (HTTP over SSL) allowed the modern age of “ecommerce” to take off on the world wide web and it has also been incorporated into common file transfer protocols such as FTPS (FTP over SSL) and AS2.

In modern usage, “protected by SSL”, “secured by SSL” or “encrypted by SSL” really means “protected by TLS or SSL version 3, whichever gets negotiated first.”  By 2010, most clients and servers knew how to negotiate TLS sessions and would attempt to do so before trying SSL version 3 (an older protocol).  Version 2 of SSL is considered insecure at this point; some clients and servers will attempt to negotiate it if attempts to negotiate TLS or SSL version 3 fail, but it is rare for negotiation to fall through to this level.

SSL depends on the use of X.509 certificates to authenticate the identity of a particular server.  The X.509 certificate deployed on an SSL/TLS server is popularly referred to as the “server certificate”.  If the name (CN) on the server certificate does not match the DNS name of a server, clients will refuse to complete the SSL/TLS connection unless the end user elects to ignore the certificate name mismatch.  (This is a common option on web browsers and FTP clients.)  If the server certificate is expired, clients will almost always refuse to complete the SSL/TLS connection.  (Ignoring certificate expiration is usually not an option available to end users.)  Finally, if the server certificate is “self signed” or is signed by a CA that the client does not trust, then most clients will refuse the connection.  (Browsers and FTP clients usually have the option to either ignore the CA chain or import and trust the CA chain to complete the negotiation.)

Optional X.509 client certificates may also be used to authenticate the identity of a particular user or device.  When used, this certificate is simply referred to as a “client certificate.”  File transfer servers either manually map individual client certificates to user codes or use LDAP or Active Directory mappings to accomplish the same goal.  File transfer servers rarely have the ability to ignore expired certificates, often have the ability to import a third-party CA certificate chain, and often have the ability to permit “self-signed” certificates.

Whether or not client certificates are in use, SSL/TLS provides point-to-point encryption and tamper protection during transmission.  As such, SSL/TLS provides sufficient protection of “data in motion”, though it provides no protection to “data at rest.”

SSL/TLS connections are set up through a formal “handshake” procedure.  First, a connecting client presents a list of supported encryption algorithms and hashes to a server.  The server picks a set of these (the combination of algorithms and hashes is called a “ciphersuite”) and sends the public piece of its X.509 server certificate to the client so the client can authenticate the identity of the server.  The client then either sends the public piece of its X.509 client certificate to the server to authenticate the identity of the client, or mocks up and sends temporary, session-specific information of a similar bent to the server.  In either case, key exchange now occurs.
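The results of a completed handshake can be observed with Python’s standard ssl module; the host below is a placeholder.

    # Sketch of inspecting a completed SSL/TLS handshake: negotiated protocol
    # version, ciphersuite and the server certificate's identity.
    import socket
    import ssl

    context = ssl.create_default_context()  # trusted CA list, hostname checking on
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version())                 # the negotiated protocol version
            print(tls.cipher())                  # negotiated ciphersuite and key bits
            print(tls.getpeercert()["subject"])  # server certificate identity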

When you hear numbers like “1024-bit”, “2048-bit” or even “4096-bit” in relation to SSL, these numbers are referring to the bit length of the keys in the X.509 certificates used to negotiate the SSL/TLS session.  These numbers are large because the exchange of keys in asymmetric public-key cryptography requires the use of very large keys, and not every 1024-bit (etc.) number is a viable key.

When you hear numbers like “80-bit”, “128-bit”, “168-bit” or even “256-bit” in relation to SSL, these numbers are referring to the bit length of the shared encryption key that is negotiated during the handshake procedure.  This number is dependent on the algorithm available on clients and servers.  For example, the AES algorithm comes in 128-bit, 192-bit and 256-bit editions, so these are all possible values for software that supports AES.

It is uncommon to refer to hash bit lengths in conversations about SSL; instead, the hash is referred to by name – typically MD5 or SHA-1.

The three most important implementations of TLS/SSL today are Microsoft’s Cryptographic Providers, Oracle’s Java Secure Socket Extension (“JSSE”) and OpenSSL.  All of these libraries have been separately FIPS validated and all may be incorporated into file transfer software at no charge to the developer.

BEST PRACTICE: The SSL/TLS implementation in your file transfer clients and servers should be 100% FIPS validated today. More specifically, the following file transfer protocols should be using FIPS validated SSL/TLS code today: HTTPS, FTPS, AS1, AS2, AS3 and email protocols secured by SSL/TLS.  Modern file transfer software supports the optional use of client certificates on HTTPS, FTPS, AS1, AS2 and AS3 and allows administrators to deny the use of SSL version 2.  If you plan to use a lot of client certificates to provide strong authentication capabilities, research the level of integration between your file transfer software, your enterprise authentication technology (e.g., Active Directory) and your PKI infrastructure.

 

Subject Access Requests (SARs)

Under GDPR, the data subject has the right to request all personal data a data controller holds on them. This includes personal data held by the controller’s supply chain.
 


SWIFT

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) runs a popular system used by banks around the world to quickly exchange transactions with each other.  Most international interbank messages use this system.  Unlike clearing houses or other institutions that provide intermediate or final settlement of financial transactions, SWIFT is simply a secure transaction service.  Remote banks may use the system to transact directly with one another, but SWIFT is not itself responsible for any activity that may occur within any bank-to-bank transaction.

SWIFT provides limited file transfer services.  For example, SWIFTNet Mail can be used to intercept messages bearing attachments and route them to other SWIFT members through the SWIFT network rather than the open network.   Internally, SWIFT has standardized on XML-wrapped transactions in its store-and-forward architecture.

Though an early advocate of near-realtime geographic redundancy across continents (North America and Europe), SWIFT pulled all operations back into the EU in 2009 after a 2006 ruling in which the Belgian government declared that SWIFT’s cooperation with U.S. federal authorities was a breach of Belgian and EU privacy laws.  (Today, cloud service providers often avoid spanning geopolitical boundaries because of this and similar landmark rulings.)

T

The Health Insurance Portability and Accountability Act (HIPAA)

HIPAA is the abbreviation of ‘The Health Insurance Portability and Accountability Act’. It is a US federal law governing the protection and privacy of sensitive patient health care information. Enacted by Congress in 1996, HIPAA was finally brought into enforcement by the Department of Health and Human Services (HHS) in 2001.

The objective of HIPAA is to encourage the development of an effective health information system. Likewise, the standards introduced must strike a balance between efficiently transmitting health care data to ensure quality patient care, whilst enforcing all necessary measures to secure personal data. This goal was achieved by establishing a set of standards relating to the movement and disclosure of private health care information.

HIPAA incorporates administrative simplification provisions, designed to help with the implementation of national standards. As such, HIPAA is broken down into 5 core rules and standards. The HHS assigned government bodies, such as the OCR (Office for Civil Rights) and CMS (Centers for Medicare & Medicaid Services) to organise and enforce these rules and standards. The OCR was assigned to administer and enforce the Privacy Rule and more recently, the Security Rule. CMS implements and governs electronic data exchange (EDI) including Transactions and Code Set standards, Employer Identification Standards and the National Identifier Standard.

HIPAA Rules and Standards


Privacy Rule: Addresses the appropriate safeguards required to protect the privacy of personal health information. It assigns limits and conditions concerning the use and disclosure of personal information held by healthcare organisations or any other businesses affiliated with these organisations.

Security Rule: The Security Rule complements the Privacy Rule but focuses specifically on Electronic Protected Health Information (EPHI). It defines three processes where security safeguards must be implemented to ensure compliance: administrative, physical, and technical.

Transactions and Code Set Standards: In this instance, the term transactions, refers to electronic exchanges involving the transfer of information between two parties. HIPAA requires the implementation of standard transactions for Electronic Data Interchange (EDI) of health care data. HIPAA also adopted specific code sets for diagnosis and procedures to be used in all transactions.

Employer Identification Standards: HIPAA requires that employers have a standard national number that identifies them on all transactions – the Employer Identification Number (EIN).

National Identification Standards: All healthcare organisations that qualify under HIPAA legislation and use electronic communications must use a single National Provider Identifier (NPI) on all transactions.

What are the implications of HIPAA in terms of file transfer?
To ensure compliance with HIPAA in terms of large file transfer, healthcare organisations must:

  • Protect the privacy of all individually identifiable health information that is stored or transmitted electronically.
  • Limit disclosures of protected health information whilst still ensuring efficient, quality patient care.
  • Enforce stringent requirements for access to records.
  • Implement policies, procedures and technical measures to protect networks, computers and other electronic devices from unauthorised access.
  • Effectuate business associate agreements with business partners that safeguard their use and disclosure of PHI.
  • Update business systems and technology to ensure they provide adequate protection of patient data.

Our specialists at Pro2col can help you to source and implement a HIPAA compliant, secure file transfer solution to suit your business requirements. Please contact us on 0333 123 1240 for more information.


TLS

TLS (“Transport Layer Security”) is the modern version of SSL and is used to secure TCP sockets.  TLS is specified in RFC 2246 (version 1.0), RFC 4346 (version 1.1) and RFC 5246 (version 1.2).  When people talk about connections “secured with SSL”, today TLS is the technology that’s really used instead of older editions of SSL.

 

BEST PRACTICE: All modern file transfer clients and file transfer servers should support TLS 1.0 today.  Most clients and servers also support TLS 1.1, but TLS 1.1 support will probably not be required unless major issues appear in TLS 1.0.  Some clients and servers support TLS 1.2 today, but it is rarely a requirement at this point.  All file transfer software should use FIPS validated cryptography to provide TLS services across file transfer protocols such as HTTPS, FTPS, AS1, AS2, AS3 or email protocols secured with TLS.


Transformation Engine

A transformation engine is software that performs the work defined in individual transformation maps.

The transformation engines that power transformation maps are typically defined as “single-pass” or “multiple-pass” engines.  Single-pass engines are faster than multiple-pass engines because documents are directly translated from source formats to destination formats, but single-pass engines often require more manual setup and are harder to integrate and extend than multiple-pass engines.  Multiple-pass engines use an intermediate format (usually XML) between the source and destination formats; this makes them slower than single-pass engines but often eases interoperability between systems.
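The multiple-pass idea can be reduced to a toy sketch in Python: the source document is first parsed into a neutral intermediate structure, then a field-by-field map renders the destination format. The field names and formats below are invented purely for illustration.

    # Toy multiple-pass transformation: CSV -> intermediate records -> JSON.
    import csv, io, json

    def csv_to_intermediate(text):
        # Pass 1: parse the source format into a neutral structure.
        return [dict(row) for row in csv.DictReader(io.StringIO(text))]

    def intermediate_to_json(records, mapping):
        # Pass 2: apply the field-by-field map and render the destination.
        return json.dumps([{dst: rec[src] for src, dst in mapping.items()}
                           for rec in records])

    source = "order_id,qty\nA100,5\nA101,12\n"
    mapping = {"order_id": "OrderNumber", "qty": "Quantity"}  # the 'map'
    print(intermediate_to_json(csv_to_intermediate(source), mapping))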

BEST PRACTICE: Your decision to use a single- or multiple-pass map transformation engine should be predicated first on performance, then on interoperability.  (It won’t matter how interoperable your deployment is if it can’t keep up with your traffic.)    However, the ever-increasing speed of computers and more common use of parallel, coordinated systems is gradually tilting the file transfer industry in favor of multiple-pass transformation engines.

 

Transformation Map

A transformation map (or just “map”) provides a standardised way to transform one document format into another through the use of pre-defined document definitions.

A single transformation map typically encompasses separate source and destination document definitions, a field-by-field “mapping” from the source document to the destination, and metadata such as the name of the map, what collection it belongs to and which people and workflows can access it.

It is common to “develop” new maps and document formats to cope with document formats unique to a specific organisation, trading arrangement or industry.  (The term “development” is still typically used with maps in the file transfer industry, even though most mapping interfaces are now 99%+ drag-and-drop.)

BEST PRACTICE: Most transformation engines (especially those tuned for a particular industry) now come with extensive pre-defined libraries of common document formats and maps to translate between them.   Before investing in custom map development, research available off-the-shelf options thoroughly.


Translation Engine

In file transfer, a “translation engine” is a common name for a “transformation engine” that converts documents from one document definition to another through “transformation maps“. See “transformation engine” for more information.
 


Transmission Control Protocol (TCP)

TCP (Transmission Control Protocol) is one of the two core protocols used in data communications, the second core protocol being IP. Part of the Internet Protocol Suite (often referred to as TCP/IP), TCP is a transport layer protocol responsible for higher-level operations. It provides reliable, ordered delivery of data packets between two endpoints and can also offer management services such as controlling message size, network congestion and rate of exchange.
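These properties are easy to see with Python’s standard socket module: the sketch below starts a throwaway echo server and sends it a message over a local TCP connection.

    # Tiny TCP demonstration: bytes sent over a connection arrive reliably
    # and in order.
    import socket
    import threading

    def echo_once(server):
        conn, _ = server.accept()   # completes TCP's three-way handshake
        with conn:
            conn.sendall(conn.recv(1024))

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0 asks the OS for any free port
    server.listen(1)
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    client.sendall(b"hello over TCP")
    print(client.recv(1024))        # b'hello over TCP'
    client.close()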
 


Transmission Window

A transmission window is a window of time in which certain file transfers are expected or allowed to occur.

Transmission windows typically reoccur on a regular basis, such as every day, on all weekdays, on a particular day of the week, or on the first or last day of the month or quarter.

Most transmission windows are contiguous and set on hourly boundaries (e.g., from 3:00pm to 11:00pm) but can also contain breaks (e.g., 3-6pm and 7-9pm) and start/end on minute/second boundaries (e.g., from 3:05:30pm to 7:54:29pm).

Files received outside of transmission windows are not immediately processed or forwarded by file transfer systems.  Instead, they are typically stored or queued, and are usually processed when the transmission window opens back up.  (e.g., a file received at 7:58am for an 8am-2pm transmission window would be processed today at 8:00am; however, a file received at 2:02pm would be processed tomorrow at 8:00am.)
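A minimal sketch of this queue-or-process decision, assuming the same 8am-2pm daily window, might look like the following in Python; the times and file names are illustrative only.

    # Hedged sketch of a daily transmission window check.
    from datetime import datetime, time

    WINDOW_OPEN, WINDOW_CLOSE = time(8, 0), time(14, 0)  # 8am-2pm daily

    def handle_arrival(path, arrived_at):
        if WINDOW_OPEN <= arrived_at.time() <= WINDOW_CLOSE:
            print(f"processing {path} now")
        else:
            print(f"queueing {path} until the window next opens at 8:00am")

    handle_arrival("orders.csv", datetime(2011, 6, 8, 7, 58))  # queued until 8:00am today
    handle_arrival("orders.csv", datetime(2011, 6, 8, 14, 2))  # queued until 8:00am tomorrow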

When transmission windows are coupled with specific types of file transfers, service level agreements (SLAs) can be written to lock down expectations and reduce variability.

BEST PRACTICE: When possible, select file transfer scheduling technology that allows you to maintain transmission windows separate from your defined workflows.  For example, if you want to change the transmission window for a particular type of file across 50 customer workflows, make sure you can do so by only changing one window definition, not 50 workflow definitions.   Also look for technology that allows you to see and control the contents of queued files received outside of transmission windows.  Your operators may want to allow, roll over or simply delete some of the files received outside any particular transmission window.

 

Trigger File

A “trigger file” is a common type of control file used to initiate further processing or retransmission.  Trigger files are typically created by the same application that originally sends files into a file transfer system.

The two most common types of trigger files are files with names similar to the files that need to be sent, and files that contain the names of the files that need to be sent.  An example of each is shown below.

Similar Name Trigger File Example: Two files named “textrequest_20110608.xml” and “textrequest_20110608.tiff” are sent into a file transfer system.  A third trigger file called “textrequest_20110608.trg” is then uploaded to tell the system to process or send the two “xml” and “tiff” files bearing similar names.

List of Files Trigger File Example: Two files named “textrequest_20110608.xml” and “textrequest_20110608.tiff” are sent into a file transfer system.  A third trigger file called “trigger24235.txt” containing the names of the “xml” and “tiff” files is then uploaded to tell the system to process those specific files.
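The resolution logic for both styles can be sketched in a few lines of Python; the directory layout and naming conventions below are assumptions for illustration.

    # Hedged sketch of resolving both trigger styles described above.
    from pathlib import Path

    def files_for_trigger(trigger: Path):
        if trigger.suffix == ".trg":
            # Similar-name style: process sibling files sharing the trigger's stem.
            return [p for p in trigger.parent.glob(trigger.stem + ".*") if p != trigger]
        # List-of-files style: the trigger's contents name the files to process.
        return [trigger.parent / line.strip()
                for line in trigger.read_text().splitlines() if line.strip()]

    # e.g. files_for_trigger(Path("inbound/textrequest_20110608.trg"))
    # -> [.../textrequest_20110608.xml, .../textrequest_20110608.tiff]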

See “control file” for more information.

 

Triple DES

3DES (also “Triple DES”) is an open encryption standard that offers strong encryption at 112-bit and 168-bit strengths.

3DES is a symmetric encryption algorithm often used today to secure data in motion in both SSH and SSL/TLS.  (After asymmetric key exchange is used to perform the handshake in an SSH or SSL/TLS session, data is actually transmitted using a symmetric algorithm such as 3DES.)

3DES is also often used today to secure data at rest in SMIME, PGP, AS2, strong Zip encryption and many vendor-specific implementations.  (After asymmetric key exchange is used to unlock a key on data at rest, data is actually read or written using a symmetric algorithm such as 3DES.)
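Both cases boil down to symmetric encryption and decryption with a shared key. The sketch below illustrates the pattern with AES-GCM via the third-party Python ‘cryptography’ package (AES rather than 3DES is shown, per the best practice at the end of this entry); the key, nonce and message are illustrative only.

    # Hedged sketch of symmetric encryption of data at rest (AES-GCM shown).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # a 256-bit symmetric key
    nonce = os.urandom(12)                     # GCM needs a unique nonce per message
    aesgcm = AESGCM(key)

    ciphertext = aesgcm.encrypt(nonce, b"payroll batch 2011-06-08", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # round-trips to the original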

NIST‘s AES competition was held to find a faster and stronger replacement for 3DES.  However, 3DES has not yet been phased out and is expected to remain approved through 2030 for sensitive government information.  (Only the 168-bit version is currently allowed; permitted use of the 112-bit version ceased January 1, 2011.) NIST validates specific implementations of 3DES under FIPS 140-2, and several hundred unique implementations have now been validated under that program.  The 3DES algorithm itself is specified in FIPS 46-3.

See the Wikipedia entry for 3DES if you are interested in the technical mechanics behind 3DES.

BEST PRACTICE: All modern file transfer clients and file transfer servers should support FIPS-validated AES, FIPS-validated 3DES or both.  (AES is faster, may have more longevity and offers higher bit rates; 3DES offers better backwards compatibility.)

 
V

Validation

Software, systems and processes that are “validated” against a standard are typically better than those merely in “compliance” with a standard.  Validation means that a third-party agency such as NIST or the PCI Council has reviewed and tested the claim of fidelity to a standard and found it to be true.  Validating agencies will usually either publish a public list of all validated implementations or will be happy to confirm any stated claim.

A common example of validation in the file transfer industry is “FIPS validation“.  Under this standard, NIST tests various vendors’ cryptography implementations, issues a validation certificate for each that passes and lists all implementations that have passed in a public web page on the NIST site.

Validation is roughly equivalent to “certification“.


VAN

VAN stands for “Value Added Network”.  A VAN is a data transfer service that uses EDI and/or file transfer protocols to connect to dozens, hundreds or even thousands of businesses.  VANs are often industry-specific; the ones that are will usually connect to almost every major supplier and consumer within that industry (e.g., auto parts).

As a marketing term, VAN is associated with a bygone era.  Most VAN solutions now position themselves as providers of cloud-based EDI and file transfer services (e.g. GXS) or of EDI and file transfer technology that completely integrates with modern Community Management services (e.g. IBM’s Sterling Commerce).

However, as a technology concept, VANs are what many “cloud” services hope to be when they grow up: a mature and reliable interconnection of most of the major businesses that serve an industry.


Virtualisation

Because virtualisation is a somewhat abstract concept, it is crucial to understand the context in which we are using the word ‘virtual’ before moving on to the definition of virtualisation. The term virtual, in this scenario, is defined as “computing not physically existing as such but made by software to appear to do so”.

Virtualisation as a concept represents the ‘virtual’ partitioning or division of a single computing entity and its resources, whether that entity be a server, application, network, storage or operating system. Alternatively, you can interpret the concept from an almost opposing standpoint and view it as multiple computing entities being combined to appear as one logical entity through a virtualisation layer. Consequently, there are many different forms of virtualisation; in this instance the focus is server virtualisation.

Originally devised by IBM in the 1960s to partition large mainframe hardware, virtualisation technology has in recent years been adopted and developed to apply to the now predominant x86 platform. Server virtualisation software enables users to virtualise a piece of hardware, including its components, i.e. hard disk, RAM and CPU. The functionality of each component can then be assigned as desired to run multiple applications, operating systems or appliances on a single piece of hardware in virtual partitions.

There are multiple advantages associated with virtualisation. The segregation of expensive computer resources increases efficiency by consolidation – reducing the number of physical servers necessary to support a business’s IT solutions. This can save companies large amounts of money on hardware acquisition as well as rack-space, which comes at a premium. Additional advantages include quick deployment, increased security, centralised management, business continuity and disaster recovery, reduced administration and reduction in energy consumption minimising carbon footprint – just to name a few.

Of course, as with any IT solution, there are also disadvantages accompanying this technology – for example, the cost of licensing, user complexity, support compatibility, security management issues and deployment dilemmas (e.g. choosing the right solutions to host in a virtual environment, as not all are suitable). With an experienced IT team leader, however, most of these issues become insignificant.

W

Web Farm

A “web farm” is a high availability application architecture that is common to many vendors and products.  It usually involves the use of multiple web (HTTP/S) application servers, each serving the same function, often relying on round-robin session distribution from a network load balancer (NLB).  However, the term is also often applied to other servers that provide services to the web, notably FTP/S, SFTP and AS2 servers in the context of file transfer.

Web farms are used to provide horizontal scalability at a single location (e.g. adding additional web farm nodes in Dallas to expand capacity from 20K users to 40K users).  They are also usually deployed in a multi-tier architecture, where data actually resides on “back end” database or file servers.

 

Web farms also fill a failover role in the sense that surviving web farm nodes can assume the duties of dead web farm nodes in an emergency.  However, this is only true when the surviving web nodes have enough capacity to serve the remaining system requirements; lose too many web nodes and you lose your failover capability too.

BEST PRACTICES: If high performance is a requirement, managed file transfer solutions with web farm architectures are preferred.  If deployed on Internet-facing servers, highly secure file transfer solutions should allow back-end data systems to reside in a separate “data zone” in a web farm configuration.  Web farms at a single location should be recoverable at a second location for disaster recovery, through the use of SAN replication from location to location.

 

 

Wide Area Network (WAN)

A network that spans a wide geographical area is referred to as a WAN (Wide Area Network). A WAN consists of a collection of LANs (Local Area Networks) connected by routers that maintain both the LAN information and the WAN information. The WAN side of each router then connects to a communications link such as a leased line (a very expensive dedicated line), existing telephone lines or satellite channels.

This form of networking can facilitate communications between computers on opposite sides of the world, and the most commonly known WAN in today’s society is the Internet. The majority of WANs are not owned by a specific company – e.g. the Internet – but rather exist under collective or distributed ownership and management. That said, private WANs can be built specifically to enable communication between remotely situated buildings within the same organisation.

 

WS_FTP Home

WS_FTP Home was a commercial file transfer client for Windows desktops.  It was in the market for about five years but was retired in favor of a new edition of WS_FTP LE in 2010.

WS_FTP Home offered a two-panel interactive user interface and batch scripts that could be scheduled with the Windows scheduler.  The protocols were all variants of FTP/S.

WS_FTP Home was a stripped down version of WS_FTP Professional.  The main features missing from the stripped down version were SFTP and scripting/scheduling.

See also “WS_FTP LE” and “WS_FTP Professional”.

 

WS_FTP LE

WS_FTP LE is a free file transfer client for Windows desktops from a commercial vendor.  The current edition is built on WS_FTP Home’s code base and was reintroduced to the market in 2010.

WS_FTP LE offers a two-panel interactive user interface and its supported protocols are all variants of FTP/S.

WS_FTP LE is a stripped down version of WS_FTP Professional.  The main features missing from the stripped down version are SFTP and scripting/scheduling.

The original version of WS_FTP LE was one of the most popular pieces of freeware ever (nearly a hundred million downloads). The original combination of WS_FTP LE and WS_FTP Professional provided one of the most successful implementations of the now-common “freemium” software business plan ever.

See also “WS_FTP Professional” and “WS_FTP Home”.  WS_FTP LE may be obtained from www.wsftple.com.

FULL DISCLOSURE: The president of File Transfer Consulting pushed through the retirement of WS_FTP Home and the return of WS_FTP LE while serving as VP of Product Management at Ipswitch.


WS_FTP Professional

WS_FTP Professional is a commercial file transfer client for Windows desktops.  It offers a two-panel interactive user interface and batch scripts that can be scheduled with the Windows scheduler.  Supported protocols include FTP/S and SFTP, plus proprietary HTTPS connections to MOVEit DMZ.

BEST PRACTICE: The WS_FTP clients still constitute one of the most popular desktop FTP brands in the market today.  All credible file transfer applications should support file transfers with WS_FTP Professional over the FTP/S and SFTP protocols.

X

X.509 Certificate

An X.509 certificate is a high-security credential used to encrypt, sign and authenticate transmissions, files and other data.  X.509 certificates secure SSL/TLS channels, authenticate SSL/TLS servers (and sometimes clients), encrypt/sign SMIME, AS1, AS2, AS3 and some “secure zip” payloads, and provide non-repudiation to the AS1, AS2 and AS3 protocols.

The relative strength of various certificates is often compared through their “bit length” (e.g., 1024, 2048 or 4096 bits) – longer is stronger.  Certificates are involved in asymmetric cryptography, so their bit lengths are often 8-10x longer (or more) than the bit lengths of the symmetric keys they are used to negotiate (usually 128 or 256 bits).

Each certificate is either signed by another certificate or is “self signed”.  Certificates that are signed by other certificates are said to “chain up” to an ultimate “certificate authority” (“CA”) certificate.  CA certificates are self-signed and typically have a very large bit length.  (If a CA certificate were ever cracked, it would put thousands if not millions of child certificates at risk.)  CAs can be public commercial entities (such as Verisign or GoDaddy) or private organisations.

Most web browsers and many file transfer clients ship with a built-in collection of pre-trusted CA certificates.  Any certificate signed by a certificate in these CA chains will be accepted by the clients without protest, and the official CA list may be adjusted to suit a particular installation.

In addition, many web browsers also ship with a built-in and hard-coded collection of pre-trusted “Extended Validation” CA certificates.  Any web site presenting a certificate signed by an extended validation certificate will cause a green bar or other extra visual cue to appear in the browser’s URL bar.  It is not typically possible to edit the list of extended validation certificates, but it is usually possible to remove the related CA cert from the list of trusted CAs to prevent these SSL/TLS connections from being negotiated.

X.509 certificates are stored by application software and operating systems in a variety of different places.  Microsoft’s Cryptographic Providers make use of a certificate store built into the Microsoft operating systems.  OpenSSL and Java Secure Sockets Extension (JSSE) often make use of certificate files.
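For example, the identity fields and expiry of a server certificate can be inspected in Python; ssl is in the standard library, while the parsing below uses the third-party ‘cryptography’ package, and the host is a placeholder.

    # Sketch of fetching and parsing a server certificate.
    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("www.example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print(cert.subject.rfc4514_string())  # e.g. CN=www.example.com
    print(cert.issuer.rfc4514_string())   # the signing CA
    print(cert.not_valid_after)           # expiry date; expired certs are refused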

Certificates may be stored on disk or may be burned into hardware devices.  Hardware devices often tie into participating operating systems (such as hardware tokens that integrate with Microsoft’s Certificate Store) or application software.

The most widespread use of X.509 hardware tokens may be in the U.S. Department of Defense (DoD) CAC card implementation.  This implementation uses X.509 certificates baked into hardware tokens to identify and authenticate individuals through a distributed Active Directory implementation.  CAC card certificates are signed by a U.S. government certificate and bear a special attribute (Subject Alternative Name, a.k.a. “SAN”) that is used to map individual certificates to records in the DoD’s Active Directory server via “userPrincipalName” AD elements.