diff --git a/doc-examples.md b/doc-examples.md
index 15f2251..c9f6171 100644
--- a/doc-examples.md
+++ b/doc-examples.md
@@ -161,7 +161,7 @@ java -jar fdt.jar -tp -p -agent
 ```
 java -jar fdt.jar -dIP -dp -sIP -p -d /tmp/destination/files -fl /tmp/file-list-on-source.txt -coord
 ```
-- Retrieving session log file. 
+- Retrieving session log file.
 
 To retrieve the session log file the user needs to provide at least these parameters:
 
diff --git a/doc-fdt-ddcopy.md b/doc-fdt-ddcopy.md
index 64fd307..d5b2889 100644
--- a/doc-fdt-ddcopy.md
+++ b/doc-fdt-ddcopy.md
@@ -9,12 +9,12 @@
 * **SCP**: java -jar fdt.jar [ OPTIONS ] [[[user@][host1:]]file1 [[[user@][host2:]]file2
 * **Coordinator**: java -jar fdt.jar [OPTIONS] -dIP \ -dp \ -sIP \ -p \ -d \ [-fl \] -coord
 * **List Files**: java -jar fdt.jar [OPTIONS] -c \ -ls \
-* **Agent**: java -jar fdt.jar [OPTIONS] -c \ -tp \ -agent 
+* **Agent**: java -jar fdt.jar [OPTIONS] -c \ -tp \ -agent
 * **Session log**: java -jar fdt.jar [OPTIONS] -c \ -d \ -sID \
 
-In Server mode the FDT will start listening for incoming client connections. The server may or may not stop after the last client finishes the transfer. In Client mode the client will connect to the specified host, where an FDT Server is expected to be running. The client can either read or write file from/to the server. 
+In Server mode the FDT will start listening for incoming client connections. The server may or may not stop after the last client finishes the transfer. In Client mode the client will connect to the specified host, where an FDT Server is expected to be running. The client can either read or write files from/to the server.
 
-In the SCP (Secure Copy) mode the local FDT instance will use SSH to start/stop the FDT server and/or client. The security is based on ssh credentials. The server started in this mode will accept connections **ONLY** from the "SCP" client. It is possible to restrict the access for the FDT Servers started from the command line using the -f option. The option accepts a list of IP addresses separated by ':'. 
+In the SCP (Secure Copy) mode the local FDT instance will use SSH to start/stop the FDT server and/or client. The security is based on ssh credentials. The server started in this mode will accept connections **ONLY** from the "SCP" client. It is possible to restrict the access for FDT Servers started from the command line using the -f option. The option accepts a list of IP addresses separated by ':'.
 
 In order to use the third party copy feature with FDT there have to be two FDT instances launched in agent mode. In Agent mode the FDT will start listening for incoming client connections on . In Agent mode the client will listen for a coordinator message with a task. After receiving the coordinator message the Agent will try to send a message to the destination Agent requesting it to open a socket for the transfer session. The destination Agent will take one transfer port from the pool, open that port for the session and then inform the source Agent that the transfer job can be started. At this point the first Agent has the session ID and sends it to the coordinator, so that the coordinator can later retrieve the FDT session log file from the remote Agent. After finishing the task the Agent will close the transfer port and return it to the transfer ports pool.
 
@@ -36,7 +36,7 @@ The OPTIONS currently supported may be server or client specific, or may be used
 **-bio** Blocking I/O mode. In this mode every channel (socket) will be configured to send/receive data synchronously and FDT will use one thread per channel. By default, non-blocking I/O will be used. On some platforms/systems the throughput can be slightly higher in blocking I/O mode. The limitation in the blocking mode is the maximum number of threads that can be used and, for very high numbers of streams (thousands), the CPU used by the kernel for scheduling the threads.
-**-iof \** Non-blocking I/O retry factor. In non-blocking mode every read/write operation which returns 0, will be repeated up to times before waiting for I/O readiness. By default this value is set to 1, which means that every network read/write operation will return in the select() (which can also be poll()/epoll()) if no more data can be processed by the underlying channel(socket). The default value should work fine on most of the systems, but values of 2 or 3, may increase the throughput on some systems. Values higher than 5 will only increase the CPU system usage, without any gain in performance. 
+**-iof \** Non-blocking I/O retry factor. In non-blocking mode every read/write operation which returns 0 will be repeated up to times before waiting for I/O readiness. By default this value is set to 1, which means that every network read/write operation will return in the select() (which can also be poll()/epoll()) if no more data can be processed by the underlying channel (socket). The default value should work fine on most systems, but values of 2 or 3 may increase the throughput on some systems. Values higher than 5 will only increase the CPU system usage, without any gain in performance.
 
 **-limit \** Restrict the transfer speed at the specified rate. K (KiloBytes/s), M (MegaBytes/s) or G (GigaBytes/s) may be used as suffixes. When this parameter is specified in the server it represents the maximum transfer rate for every FDT session. If the parameter is specified in both the server and the client, the minimum value between them will be used.
 
@@ -46,11 +46,11 @@ The OPTIONS currently supported may be server or client specific, or may be used
 **-v** Verbose. Multiple 'v'-s (up to three) may be used to increment the verbosity level. Maximum level is three (-vvv) which corresponds to Level.FINEST for the standard Java logging system used by FDT.
 
-**-u, -update** Update. If a newer version of fdt.jar is available on the update server it will update the local copy 
+**-u, -update** Update. If a newer version of fdt.jar is available on the update server it will update the local copy.
 
 **Server options :**
 
-**-S** disable the standalone mode; when specified the FDT Server will stop after the last client finishes. By default, the server will continue to listen for incoming clients. This option is automatically passed to the server started in "SCP" mode. 
+**-S** disable the standalone mode; when specified the FDT Server will stop after the last client finishes. By default, the server will continue to listen for incoming clients. This option is automatically passed to the server started in "SCP" mode.
 
 **-bs \** Size for the I/O buffers. K (KiloBytes) or M (MegaBytes) may be used as suffixes. The default value is 512K. If the number of clients or sockets is expected to be very high it is better to decrease this value. The memory used by these buffers is directly mapped in the operating system memory pages; it is limited by the JVM and can be increased by passing -XX:MaxDirectMemorySize=m (e.g. -XX:MaxDirectMemorySize=256m) to the 'java' command.
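+For example (the values below are illustrative, not recommendations), a server expected to handle a very large number of sockets could be started with smaller buffers while raising the JVM direct-memory limit:
+
+```
+java -XX:MaxDirectMemorySize=256m -jar fdt.jar -bs 128K
+```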
@@ -60,17 +60,17 @@ The OPTIONS currently supported may be server or client specific, or may be used
 **-c \** connect to the specified host. If this parameter is missing the FDT will become a server
 
-**-gsissh** used in the Secure Copy Mode to specify GSI authentication instead of normal SSH authentication scheme. The remote sshd server must support GSI authentication. 
+**-gsissh** used in the Secure Copy Mode to specify GSI authentication instead of the normal SSH authentication scheme. The remote sshd server must support GSI authentication.
 
-**-d \** The destination directory used to copy files. 
+**-d \** The destination directory used to copy files.
 
-**-fl \** a list of files. Must have only one file per line. 
+**-fl \** a list of files. Must have only one file per line.
 
-**-pull** Pull mode. The client will receive the data from the server. 
+**-pull** Pull mode. The client will receive the data from the server.
 
-**-N** disable Nagle algorithm 
+**-N** disable the Nagle algorithm
 
-**-ss \** Set the TCP SO_SND_BUFFER size. M and K may be used as suffixes for Kilo/Mega. 
+**-ss \** Set the TCP SO_SND_BUFFER size. M and K may be used as suffixes for Kilo/Mega.
 
 **-P \** Number of parallel streams to use. Default is 4.
 
@@ -86,9 +86,9 @@ Agent can use both Server and Client options too, because at any time Agent can
 **Common options used for FDT Coordinator mode :**
 
-**-d \** The destination directory used to copy files. 
+**-d \** The destination directory used to copy files.
 
-**-fl \** a list of files. Must have only one file per line. 
+**-fl \** a list of files. Must have only one file per line.
 
 **-dIP \** destination Agent IP address.
 
@@ -112,9 +112,9 @@ Agent can use both Server and Client options too, because at any time Agent can
 **-sID \** session ID retrieved from coordinator.
 
-**-d \** The destination directory used to copy session log file. 
+**-d \** The destination directory used to copy the session log file.
+
- 
 
 ### DDCopy
 
 **DDCopy** is very similar to the Unix `dd` command and can be used to test the local disks or file system. It is bundled in fdt.jar and has the following syntax:
 
diff --git a/doc-security.md b/doc-security.md
index b623171..90c0721 100644
--- a/doc-security.md
+++ b/doc-security.md
@@ -22,10 +22,10 @@ In this mode the server activates a simple IP-based firewall where each source I
 By default FDT starts allowing clients from any destination. To enable this mode, pass the "-f" option when starting the FDT server:
 
--f , where allowedIPsList: A list of IP addresses allowed to connect to the server. 
+-f , where allowedIPsList: A list of IP addresses allowed to connect to the server.
 Multiple IP addresses may be separated by ':'. You can use CIDR notation to specify an entire subnet.
- 
+
 `However, please note that this mode does not enable any privacy or confidentiality on the client-server control channel and it may be subject to source IP spoofing.`
 
 IP filtering can be used together with other authentication schemes.
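+As an illustration (the addresses below are examples only), a server that accepts connections from a single host plus one subnet could be started with:
+
+```
+java -jar fdt.jar -f 192.168.10.5:10.1.2.0/24
+```
+
+Clients connecting from any other source address should be rejected by the server.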
@@ -129,7 +129,7 @@ The clients connecting to the server are authenticated using the current environ
 **default location:** /etc/grid-security/certificates override with X509_CERT_DIR environment variable
 
 By default, the authorization of users is based on the grid-mapfile file available in the current Globus installation:
- **default** /etc/grid-security/grid-mapfile or 
+ **default** /etc/grid-security/grid-mapfile or
 Override with GRIDMAP java property or environment variable
 
 Other authorization modules may be plugged in to the FDT server by specifying:
 -Dgsi.authz.Authorization=customAuthzPluginClass
 
diff --git a/doc-system-tuning.md b/doc-system-tuning.md
index 78df78c..bf42570 100644
--- a/doc-system-tuning.md
+++ b/doc-system-tuning.md
@@ -24,7 +24,7 @@ We suggest to use newer linux distributions, or if this is not possible, update
 ```net.ipv4.tcp_moderate_rcvbuf = 1```
 
 After adding them just run the following command as root:
- 
+
 ```#sysctl -p /etc/sysctl.conf```
 
 The settings above will set a maximum of 8 MBytes buffers.
 
diff --git a/internet2-demo.md b/internet2-demo.md
index cd05783..1b1001e 100644
--- a/internet2-demo.md
+++ b/internet2-demo.md
@@ -91,7 +91,7 @@ The command for the local client will be. On VM1
 ```
-[local computer]$ java -jar fdt.jar -pull -r -c $SERVER2 -d ./share /usr/share 
+[local computer]$ java -jar fdt.jar -pull -r -c $SERVER2 -d ./share /usr/share
 ```
 
 _Recursive copying in SCP mode_
 
@@ -120,7 +120,7 @@ ON VM1
 **Testing network connectivity**
 
-To test the network connectivity one can start a transfer of data from /dev/zero on the server (VM2) to /dev/null on the client (VM1) using 10 streams in blocking mode, for both the server and the client with 8 MBytes buffers. The server will stop after the test is finished 
+To test the network connectivity one can start a transfer of data from /dev/zero on the server (VM2) to /dev/null on the client (VM1) using 10 streams in blocking mode, for both the server and the client with 8 MBytes buffers. The server will stop after the test is finished.
 
 On the VM2 (server):
 ```
@@ -179,7 +179,7 @@ java -jar fdt.jar -tp -p -agent
 ```
 java -jar fdt.jar -dIP -dp -sIP -p -d /tmp/destination/files -fl /tmp/file-list-on-source.txt -coord
 ```
-- Retrieving session log file. 
+- Retrieving session log file.
 
 To retrieve the session log file the user needs to provide at least these parameters:
 
diff --git a/perf-disk-to-disk.md b/perf-disk-to-disk.md
index d38adf6..2eebc61 100644
--- a/perf-disk-to-disk.md
+++ b/perf-disk-to-disk.md
@@ -3,7 +3,7 @@
 [Disk to Disk] [[Memory to Memory](perf-memory-to-memory.md)] [[SC06](perf-sc06.md)] [[SC08](perf-sc08.md)] [[SC09](perf-sc09.md)]
 
 ### FDT Disk To Disk I/O Performance over WAN
- 
+
 **1. Disk Servers with hardware RAID controllers**
 
 This performance test was done using two disk servers between CERN and Caltech (RTT ~ 170 ms). Each system used a 10Gb/s network card (the system at Caltech has a Myricom card and the system at CERN has a Neterion card).
 
@@ -38,10 +38,10 @@ Figure 3. The CPU utilization for the receiving server
 
 If we used only one RAID controller in the data transfer on each server, the total transfer rate is shown in Figure 4. In this case the mean total throughput is ~ 2.6 Gb/s or 325 MB/s.
 
-![Figure 4. The total network throughput for a Disk to disk transfer between CERN - Caltech, when only 
+![Figure 4. The total network throughput for a Disk to disk transfer between CERN - Caltech, when only
 one RAID controller was used on both servers](/img/figure4.png)
 
-Figure 4. The total network throughput for a Disk to disk transfer between CERN - Caltech, when only 
+Figure 4. The total network throughput for a Disk to disk transfer between CERN - Caltech, when only
 one RAID controller was used on both servers
 
 ##### 2. Simple Servers
 
diff --git a/perf-memory-to-memory.md b/perf-memory-to-memory.md
index ea1e831..762ec55 100644
--- a/perf-memory-to-memory.md
+++ b/perf-memory-to-memory.md
@@ -26,7 +26,7 @@ The throughput in each direction was very stable at ~ 9.2 GB/s (Figure 2). The C
 Figure 2. The throughput between C1-NY (sender) and C1-GVA (receiver). The TCP buffer size was set to 2MB and we used 35 streams. The RTT is 93ms.
 
 ![CPU utilization for the sender and receiver](/img/figure3-m2m.png)
- 
+
 Figure 3. CPU utilization for the sender and receiver.
 
 ##### Transfers in both directions with a pair of servers
 
@@ -35,7 +35,7 @@ We used one pair (C1-NY and C1-GVA) to concurrently send and receive data on the
 Perhaps the limitation is due to the PCI express bus access to the memory. The throughput measured on localhost is very close to the aggregated traffic obtained in this test.
 
 ![The throughput in both directions between C1-NY and C1-GVA. The TCP buffer size was set to 2MB and we used 20 streams for each transfer. The RTT is 93ms](/img/figure4-m2m.png)
- 
+
 Figure 4. The throughput in both directions between C1-NY and C1-GVA. The TCP buffer size was set to 2MB and we used 20 streams for each transfer. The RTT is 93ms.
 
 ##### Transfers in both directions with two pairs of servers
 
@@ -43,7 +43,7 @@
 One pair of servers (C1-NY and C1-GVA) was used to send data from MANLAN to CERN and the other one to send data from CERN to MANLAN. The measured traffic in the MANLAN router is shown in Figure 5. The total throughput in each direction was quite stable. The traffic from CERN to MANLAN was ~ 9.2Gb/s and the traffic from MANLAN to CERN was ~ 9Gb/s.
 
 ![The throughput in both directions between two pairs of servers. The TCP buffer size was set to 2MB and we used 35 streams for each transfer. The RTT is 93ms](/img/figure5-m2m.png)
- 
+
 Figure 5. The throughput in both directions between two pairs of servers. The TCP buffer size was set to 2MB and we used 35 streams for each transfer. The RTT is 93ms.
 
 ##### Results for Memory to Memory transfers in LAN
 
@@ -51,5 +51,5 @@
 The data transfer between the two systems at CERN (C1-GVA to C2-GVA) or MANLAN (C1-NY to C2-NY) runs very close to the theoretical limit of 10Gb/s (Figure 6) and is stable. We used 3 streams with 2MB TCP buffer size.
 
 ![The throughput in LAN between two servers](/img/figure6-m2m.png)
- 
+
 Figure 6. The throughput in LAN between two servers.
 
diff --git a/perf-sc08.md b/perf-sc08.md
index adbd6b2..65ee3b4 100644
--- a/perf-sc08.md
+++ b/perf-sc08.md
@@ -9,6 +9,6 @@ The record-setting demonstration was made possible through the use of twelve 10
 ### 100G test with Ciena
 
-Second major milestone was achieved by the HEP team working together with Ciena, who had just completed its first OTU-4 (112 Gbps) standard link carrying a 100 Gbps payload (or 200 Gbps bidirectional) with forward error correction. The Caltech and Ciena teams used an optical fiber cable with ten fiber-pairs linking their neighboring booths, Ciena’s system to multiplex and demultiplex ten 10 Gbps links onto the single OTU-4 wavelength running on an 80 km fiber loop, and some of Caltech’s nodes used in setting the wide area network records together with FDT, to achieve full throughput over the new link. Thanks to FDT’s high throughput capabilities, and the error free links between the booths, the teams were able to achieve a maximum of 199.90 Gbps bi-directionally (memory-to-memory) within minutes of the start of the test, and an average of 191 Gbps during a 12 hour period that logged the transmission of 1.02 Petabytes overnight. 
+The second major milestone was achieved by the HEP team working together with Ciena, which had just completed its first OTU-4 (112 Gbps) standard link carrying a 100 Gbps payload (or 200 Gbps bidirectional) with forward error correction. The Caltech and Ciena teams used an optical fiber cable with ten fiber-pairs linking their neighboring booths, Ciena’s system to multiplex and demultiplex ten 10 Gbps links onto the single OTU-4 wavelength running on an 80 km fiber loop, and some of Caltech’s nodes used in setting the wide area network records together with FDT, to achieve full throughput over the new link. Thanks to FDT’s high throughput capabilities and the error-free links between the booths, the teams were able to achieve a maximum of 199.90 Gbps bi-directionally (memory-to-memory) within minutes of the start of the test, and an average of 191 Gbps during a 12-hour period that logged the transmission of 1.02 Petabytes overnight.
 
 ![FDT @ SC08 Image](/img/ciena_sc08_1.jpg)
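+The records above were set on dedicated hardware, but the same kind of memory-to-memory measurement can be sketched on any two hosts, along the lines of the /dev/zero to /dev/null network test described on the Internet2 demo page. The host name below is a placeholder, and the stream count and buffer size are only examples:
+
+```
+# on the server (stops after the test because of -S)
+java -jar fdt.jar -S -bio -P 10 -bs 8M
+# on the client, pulling from the server into /dev/null
+java -jar fdt.jar -pull -c <server_host> -bio -P 10 -bs 8M -d /dev/null /dev/zero
+```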