Saturday, July 31, 2010

Bharath Operating System (BOSS)



BOSS (Bharat Operating System Solutions) is a GNU/Linux distribution developed by C-DAC (Centre for Development of Advanced Computing), derived from Debian, for enhancing the use of Free/Open Source Software throughout India. BOSS GNU/Linux, a key deliverable of NRCFOSS, has been upgraded from an entry-level server to an advanced server. It supports the Intel and AMD x86/x86-64 architectures. BOSS GNU/Linux advanced server provides features such as a web server, proxy server, database server, mail server, network server, file and print server, SMS server and LDAP server. It also ships with administration tools such as Webmin (a web-based interface), GAdmin, phpMyAdmin, phpLDAPadmin and pgAdmin.

BOSS GNU/Linux Version 3.0 is coupled with the GNOME and KDE desktop environments, with wide Indian language support and packages relevant for use in the government domain. Currently the BOSS GNU/Linux desktop is available in almost all the Indian languages, including Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Sanskrit, Tamil, Telugu, Bodo, Urdu, Kashmiri, Maithili, Konkani and Manipuri, enabling the largely non-English-literate users in the country to be exposed to ICT and to use computers more effectively.


The accessibility of BOSS GNU/Linux will have a constructive impact on the digital divide in India, as more people can now access software in their local language to use the Internet and other information and communications technology (ICT) facilities. Community Information Centres (CICs) and internet cafes will also benefit from BOSS GNU/Linux, as the software can power these outlets and is affordable and easy to install, use and support.

BOSS 3.0
Company / developer: NRCFOSS / C-DAC, India
OS family: Unix-like
Working state: Current
Source model: Free and open source software
Initial release: January 10, 2007
Latest stable release: 3.0 / September 5, 2008
Marketing target: General purpose
Available language(s): Multilingual (more than 18)
Package manager: dpkg
Supported platforms: i386, AMD64[1]
Kernel type: Monolithic (Linux)
Userland: GNU
Default user interface: GNOME and KDE
License: GNU GPL and various others
Official website: www.bosslinux.in

Supercomputers in India




C-DAC's HPCC (High Performance Computing and Communication) initiatives are aimed at designing, developing and deploying advanced computing systems, tools and technologies that impact strategically important application areas.

Fostering an environment of innovation and dealing with cutting edge technologies, C-DAC's PARAM series of supercomputers have been deployed to address diverse applications in science and engineering, and business computing at various institutions in India and abroad.

C-DAC's commitment to the HPCC initiative has once again manifested itself in the design, development and deployment of PARAM Padma, a terascale supercomputing system.

PARAM Padma is C-DAC's next-generation high-performance scalable computing cluster, currently with a peak computing power of one teraflop. The hardware environment is powered by compute nodes based on state-of-the-art POWER4 RISC processors, built with copper interconnect and SOI technology, in Symmetric Multiprocessor (SMP) configurations. These nodes are connected through a primary high-performance System Area Network, PARAMNet-II, designed and developed by C-DAC, with Gigabit Ethernet as a backup network.

PARAM Padma is powered by C-DAC's flexible and scalable HPCC software environment. The storage system of PARAM Padma has been designed to provide a primary storage of 5 terabytes, scalable to 22 terabytes. The network-centric storage architecture, based on state-of-the-art Storage Area Network (SAN) technologies, ensures high-performance, scalable and reliable storage. It uses Fibre Channel Arbitrated Loop (FC-AL) technology for interconnecting storage subsystems such as parallel file servers, NAS servers, metadata servers, RAID storage arrays and automated tape libraries, achieving an I/O performance of up to 2 gigabytes/second.

The Secondary backup storage subsystem is scalable from 10 Terabytes to 100 Terabytes with an automated tape library and support for DLT, SDLT and LTO Ultrium tape drives. It implements a Hierarchical Storage Management (HSM) technology to optimize the demand on primary storage and effectively utilize the secondary storage.

The PARAM Padma system is also accessible by users from remote locations.
An overview of PARAM Padma – A Teraflop Computing System

PARAM Padma is a teraflop cluster of 62 four-way SMP nodes and one 32-way SMP node, giving a total of 280 POWER4 RISC processors, interconnected with C-DAC's own proprietary PARAMNet system area network technology, as shown in Figure 1. The theoretical peak performance of the complete configuration is 1.13 teraflops. The nodes are connected through a primary high-performance system area network, PARAMNet-II, with Gigabit Ethernet as a backup network.

Each node is a 4-way SMP supporting four 1 GHz POWER4 RISC processors, and the aggregate memory of each compute node is 8 gigabytes. Each processor core has a 16 KB L1 cache with a latency of 4 ns to 6 ns, and two processor cores share a 1.41 MB L2 cache with a latency of 9 ns to 14 ns. Four processors share a common 128 MB L3 cache. PARAM Padma has 6 file servers, each with UltraSPARC-III processors in a 4-way SMP configuration and an aggregate memory of 16 gigabytes.

The major components of PARAMNet-II are the SAN switch (16 ports), the PARAMNet-II Network Interface Card (NIC) with C-DAC's Communication Co-Processor (CCP-III), and C-DAC's Virtual Interface Provider Library (C-VIPL), part of the C-DAC HPCC suite of software tools, as shown in Figure 2 and Figure 3. A PARAMNet-II network comprises N hosts connected in a non-blocking fat-tree topology. The PARAMNet-II switch is based on a non-blocking crossbar architecture and supports 8 or 16 ports, each providing 2.5 Gbps full-duplex raw bandwidth (2 Gbps full-duplex data bandwidth). The non-blocking architecture of the switch allows multilevel switching for realizing a large cluster. The switch offers a very low latency, of the order of 0.5 µs, and it uses an interval routing scheme and group adaptive routing based on a Least Recently Used (LRU) algorithm to ensure uniform bandwidth distribution.
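To illustrate the group adaptive routing idea, here is a minimal sketch (in C) of LRU-based selection among a group of equivalent output ports. It is a hypothetical model of the policy described above, not C-DAC's actual switch logic; the port numbers, timestamps and data structures are assumptions made purely for illustration.

```c
/* Illustrative sketch only: group-adaptive routing with LRU port selection.
 * Port counts, timestamps and the routing-table layout are assumptions for
 * illustration, not C-DAC's PARAMNet-II design. */
#include <stdio.h>

#define NUM_PORTS 16

/* last_used[p] holds a logical timestamp of the last packet sent on port p */
static unsigned long last_used[NUM_PORTS];
static unsigned long clock_tick;

/* Pick the least-recently-used port among a group of equivalent output
 * ports (e.g. several uplinks in a fat tree that all reach the destination). */
int select_output_port(const int *group, int group_size)
{
    int best = group[0];
    for (int i = 1; i < group_size; i++) {
        if (last_used[group[i]] < last_used[best])
            best = group[i];
    }
    last_used[best] = ++clock_tick;   /* mark the chosen port as most recent */
    return best;
}

int main(void)
{
    int uplinks[] = {8, 9, 10, 11};   /* hypothetical group of uplink ports */
    for (int pkt = 0; pkt < 8; pkt++)
        printf("packet %d -> port %d\n", pkt, select_output_port(uplinks, 4));
    return 0;
}
```

Cycling packets across the group in this way spreads traffic evenly over the uplinks, which is the "uniform bandwidth distribution" the text refers to.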

A single switch can support up to eight (SAN-SW8) or sixteen (SAN-SW16) hosts. To support more hosts, a multistage network is adopted; this network can be made blocking or non-blocking, depending on the number of switches used and the desired cost/performance trade-off. Various topologies are possible by modifying the routing tables of the switches.


Figure 3: C-DAC’s Virtual Interface Provider (C-VIPL)

Two types of configuration of PARAM Padma are used for multi-level switching of PARAMNet-II switches. In the first configuration, five first-level switches (SAN-SW16) and one second-level switch (SAN-SW16) have been employed to configure the cluster. The second configuration involves twelve switches and is fully non-blocking; these are split into eight first-level and four second-level switches. There are no bottlenecks in this topology, and the bisection bandwidth scales with the number of nodes. For the entire 62-node cluster, the available bisection bandwidth is 4 Gbytes/sec. For both configurations, the latencies associated with packet routing are very small (~1.5 µs for three levels of switching).


The NIC card is based on the CCP-III chip, implemented in a 0.15 micron, 1-million-gate technology. The NIC provides an interface to the SAN-SW8 and SAN-SW16 switches; it supports 2.5 Gbps (fibre) links and a PCI 2.2 64-bit/66 MHz host interface. The NIC supports connection-oriented (VIA) and connectionless (AM) protocols. The CCP has been designed to reduce software latency and increase data throughput, the main parameters for good and effective communication. It avoids unnecessary copying of data, either by delivering the message directly into the destination buffer or by copying it to a page-aligned temporary area from where the kernel can remap it, thereby reducing the number of copies.

C-DAC's HPCC suite of software tools on PARAM Padma effectively addresses the performance and usability challenges of clusters through high-performance communication protocols and a rich set of program development, system management and software engineering tools. KSHIPRA, a communication substrate designed to support low latency and high bandwidth, is the key to the high level of aggregate system performance and scalability of the C-DAC HPCC software. C-VIPL, a part of KSHIPRA, is a scalable communication substrate for clusters of multiprocessors designed to support low latency, high bandwidth and a high level of aggregate system performance. C-VIPL is an application program interface for PARAMNet-II and adheres to the VIA Specification version 1. It supports diverse operating systems such as AIX, Linux, Solaris and Windows. C-VIPL is compatible with C-MPI and MVICH, an MPICH implementation of MPI for the Virtual Interface Architecture (VIA). The HPCC software also provides low-overhead communication, an optimized MPI (C-MPI) and a Parallel File System (PFS) with an MPI-IO interface to enable applications to scale on large clusters. Included in the HPCC software suite are high-performance compilers, parallel debuggers, data visualisers, performance profilers, and cluster monitoring and management tools.
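Applications on such a cluster see this stack through the standard MPI interface, which C-MPI implements over C-VIPL and PARAMNet-II. The following is a minimal sketch of a portable MPI point-to-point exchange, assuming nothing beyond the standard MPI API; it does not use any C-DAC-specific calls.

```c
/* Minimal standard-MPI example of the kind of point-to-point exchange that
 * an MPI implementation such as C-MPI supports over the cluster interconnect.
 * Only portable MPI calls are used. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double payload[1024];
    if (rank == 0 && size > 1) {
        for (int i = 0; i < 1024; i++) payload[i] = (double)i;
        MPI_Send(payload, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(payload, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received 1024 doubles, last = %.1f\n", payload[1023]);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with the cluster's MPI compiler wrapper (e.g. mpicc) and launched on two or more processes, code like this runs unchanged whether the underlying transport is PARAMNet-II or the Gigabit Ethernet backup network.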

C-MPI is a high-performance implementation of the MPI standard for Clusters of Multi-Processors (CLUMPS). C-MPI also leverages the fact that most high-performance networks provide substantial bidirectional exchange bandwidth. This allows the tuned algorithms to send and receive messages over the network simultaneously, which helps reduce the number of communication hops. In addition, the algorithms effectively use the higher shared-memory communication bandwidth within multiprocessor nodes. C-MPI is also SMP-aware: within a node it uses direct memory copies instead of going through an intermediate shared buffer and the network. This is critical to improving MPI communication performance on PARAM Padma.

C-PFS, a client-server, user-level parallel file system, addresses the need for high I/O throughput. Exporting MPI-IO interfaces for parallel programming and a UNIX interface for system management, C-PFS fully exploits the concurrent data paths between the compute nodes and the terabytes of storage in PARAM Padma. The storage system of PARAM Padma has been designed to provide a primary storage of 5 terabytes, scalable to 22 terabytes. A network-centric storage architecture has been used, based on state-of-the-art Storage Area Network (SAN) technologies, ensuring high-performance, scalable and reliable storage. It uses Fibre Channel Arbitrated Loop (FC-AL) technology for interconnecting storage subsystems such as parallel file servers and automated tape libraries, achieving an I/O performance of up to 2 gigabytes/second. The secondary backup storage subsystem is scalable from 10 terabytes to 100 terabytes with an automated tape library. It implements hierarchical storage management (HSM) to optimize the demand on primary storage and effectively utilize the secondary storage.
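Since C-PFS is described as exporting MPI-IO interfaces, a parallel write through the portable MPI-IO API gives a feel for how applications would drive the storage system. The sketch below uses only standard MPI-IO calls; the file name is hypothetical and no C-PFS-specific API is assumed.

```c
/* Minimal sketch of parallel I/O through the standard MPI-IO interface.
 * The file name "pfs_demo.dat" is hypothetical; only portable MPI-IO calls
 * are used, not any C-PFS-specific API. */
#include <mpi.h>

#define N 1024   /* doubles written by each rank */

int main(int argc, char **argv)
{
    int rank;
    double buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < N; i++) buf[i] = rank + i * 1e-6;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "pfs_demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes its block at a disjoint offset; a parallel file
     * system can service these writes concurrently over the SAN. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```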

The industrial design and packaging of PARAM Padma offer the flexibility to scale from a system with a few nodes to systems with a large number of nodes, as shown in Figure 1. The PARAM Padma enclosure has been designed taking into consideration the environmental requirements of high-performance computing sub-assemblies, such as heat transfer and electromagnetic interference/compatibility. It is a standard 19-inch, 48U rack with shielded cable trays and accessories for housing compute nodes, file servers, network switches and cables.

Application and System Benchmarks

Several characteristics of various application and system benchmarks were considered during the design and development of PARAM Padma in order to reduce the cost of communication from both the hardware and the software point of view. The benchmarks reported here use the first configuration of PARAM Padma with the HPCC software over PARAMNet as the parallel programming environment. In addition, a Gigabit Ethernet interconnect with the IBM MPI programming environment was used for several of the benchmarks.

Macro and micro benchmarks have been used to test and extract the sustained performance of PARAM Padma. P-COMS (PARAM Communication Overhead Measurement Suites, version 1.1.1), a set of test suites, has been used to model the performance of MPI point-to-point and collective communications on PARAM Padma. These suites compare the performance of point-to-point communications, including send and receive overheads for different send and receive modes and different (contiguous) message lengths, and also estimate the network latency and bandwidth. It has been observed that latencies are as low as 15-20 µs and the bandwidth is 160 MB/s on PARAMNet with the HPCC software. A comparative study of the measured communication overheads for the different system area networks, PARAMNet with HPCC software and Gigabit Ethernet with IBM MPI, indicates that the overheads for MPI communication primitives are considerably lower on PARAMNet.
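For a sense of how such point-to-point figures are obtained, here is a simple MPI ping-pong sketch of the kind communication benchmark suites use to estimate latency and bandwidth. It is an illustrative stand-in, not the P-COMS code; the message sizes and repetition counts are arbitrary, and it expects at least two MPI processes.

```c
/* An illustrative MPI ping-pong of the kind benchmark suites use to estimate
 * point-to-point latency and bandwidth. Not the P-COMS code; sizes and
 * repetition counts are arbitrary. Run with at least two MPI processes. */
#include <mpi.h>
#include <stdio.h>

#define SMALL 8            /* bytes, for latency          */
#define LARGE (1 << 20)    /* bytes, for bandwidth (1 MB) */
#define REPS  1000

static char buf[LARGE];

/* Time REPS round trips of `bytes`-sized messages between ranks 0 and 1. */
static double pingpong(int rank, int bytes)
{
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    return (MPI_Wtime() - t0) / REPS;   /* average round-trip time */
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double rtt_small = pingpong(rank, SMALL);
    double rtt_large = pingpong(rank, LARGE);

    if (rank == 0) {
        printf("one-way latency ~ %.2f us\n", rtt_small / 2 * 1e6);
        printf("bandwidth       ~ %.1f MB/s\n", (2.0 * LARGE / rtt_large) / 1e6);
    }
    MPI_Finalize();
    return 0;
}
```

Small messages expose the fixed per-message latency, while large messages expose the sustained bandwidth, which is why both are measured.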

NPB (NAS Parallel Benchmarks) is a collection of benchmarks used to test the performance of PARAM Padma. NPB 2.3 comprises eight CFD problems, coded in MPI with standard C and Fortran 77/90. The LU kernel of NPB performs a triangular factorization of a matrix and involves sending small messages of less than 100 bytes. Initial experiments indicate that the execution time for LU class C problems decreases linearly up to 62 processors of PARAM Padma with PARAMNet-II and HPCC software. Tuning and optimization of the code are being carried out.

HPL (High Performance LINPACK), used for the TOP500 supercomputer list, is a popular benchmark suite to evaluate the capabilities of supercomputers and clusters. The results of the benchmark are published semi-annually in the Top500 list of the world's most powerful computers. The benchmark involves solving a dense system of linear equations. HPL performance mainly depends on the performance of the underlying communication network, the tasks executing at the different nodes of the cluster, the shared-memory implementation of MPI, and the quality of process-to-processor mapping. The higher bandwidth and lower latency of the high-performance PARAMNet switch network with C-DAC's HPCC software resulted in better performance in comparison with the Gigabit interconnect. The results of the HPL benchmark on 32 and 64 processors reveal a minor improvement on the PARAMNet architecture with HPCC software compared to Gigabit Ethernet with IBM MPI. However, as the number of nodes increases, the performance of HPL on PARAMNet with HPCC software shows a substantial improvement over the Gigabit interconnect with IBM MPI. The Top500 test with optimal parameters on the 62-node (248-processor) configuration resulted in approximately 532 Gflops against a peak performance of 992 Gflops on PARAM Padma, i.e. approximately 53.6% of peak. PARAMNet performs much better in all HPL tests because of its low latency and high bandwidth, and scales very well up to 62 nodes (248 processors).
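As a quick sanity check of the quoted figures: 248 processors at 1 GHz with 4 floating-point operations per cycle give the 992 Gflops peak, and 532 Gflops sustained is about 53.6% of that. Note that the 4 flops/cycle figure is inferred from the numbers in the text, not quoted from a specification.

```c
/* Back-of-the-envelope check of the HPL numbers quoted above. The
 * 4 flops/cycle value is inferred from the quoted 992 Gflops peak for
 * 248 processors at 1 GHz; it is an assumption, not a quoted spec. */
#include <stdio.h>

int main(void)
{
    const double processors      = 248;    /* 62 nodes x 4 CPUs        */
    const double clock_ghz       = 1.0;    /* POWER4 clock in the text */
    const double flops_per_cycle = 4.0;    /* inferred, see comment    */
    const double rmax_gflops     = 532.0;  /* measured HPL result      */

    double rpeak = processors * clock_ghz * flops_per_cycle;      /* Gflops */
    printf("Rpeak      = %.0f Gflops\n", rpeak);                   /* 992   */
    printf("Efficiency = %.1f %%\n", 100.0 * rmax_gflops / rpeak); /* ~53.6 */
    return 0;
}
```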

Scientific and Engineering Applications on PARAM Padma

Real-life complex application problems and scientific and engineering research are the driving force behind the development of PARAM Padma. Many applications in critical scientific and engineering fields like Bioinformatics, Computational Structural Mechanics, Computational Atmospheric Sciences, Seismic Data Processing, Computational Fluid Dynamics, Evolutionary Computing and Computational Chemistry have been executed on PARAM Padma. The following paragraphs describe some of the high performance computing activities pursued at C-DAC in these areas.

In Bioinformatics, molecular dynamics codes such as AMBER, CHARMM and GROMACS, used for realistic simulation of large biomolecules, have been ported to PARAM Padma. Figure 5 gives the results of a ten-nanosecond simulation done using AMBER. An in-house problem-solving environment has also been developed so that biologists can run codes like AMBER, CHARMM, FASTA and BLAST (parallel versions of these have been ported) through a simple interface that shields the user from the intricacies of parallel computing.

Developments in Computational Structural Mechanics include FEMCOMP for stress analysis of FRP composite structures, NONLIN for stability analysis of thin-walled structures, and FRACT3D, a parallel fracture mechanics package based on domain decomposition.



In Computational Atmospheric Sciences, the Climate System Model (CCSM2) for climate change simulations and the Mesoscale Model (MM5) for weather simulation at one-kilometre resolution have been implemented. Figure 6 gives results obtained using MM5 on PARAM Padma. Pre-processors to ingest Indian meteorological data into the MM5 modelling system for regional weather forecasting have been developed at C-DAC. The capability to run long climate simulations with CCSM2 on PARAM Padma is available.

Under Seismic Data Processing activities, a parallel seismic modeling and migration package (WAVES) for oil and natural gas exploration has been developed on PARAM Padma. This parallel software is focused on implementation of high precision 3-D seismic migration and modeling algorithms, and our experiments indicate that the software is scalable to a very large number of processors.

Major activities in Computational Fluid Dynamics include simulations of hypersonic flow over a re-entry vehicle, fuel flow characteristics of an IC engine, and a general 2-D Navier-Stokes solver. Figure 7 gives the performance of the 2-D Navier-Stokes solver on PARAM Padma up to 248 processors, showing good scalability.



In the evolutionary computing area, parallel genetic-algorithm-based methodologies for protein structure prediction, multiple sequence alignment and financial modelling have been developed. In Computational Quantum Chemistry, an indigenous package called INDMOL for electronic structure and molecular properties has been developed and benchmarked on PARAM Padma. GAMESS, a widely used public-domain code, has also been ported.

C-DAC's Tera-Scale Supercomputing Facility (CTSF)

While the need for and usefulness of high-performance supercomputing in business as well as scientific and engineering applications is unquestioned and growing rapidly, it is not economically viable to have such facilities at every user site. Recognizing this, C-DAC had earlier set up the National PARAM Supercomputing Facility (NPSF) at Pune, housing its earlier-generation PARAM 10000, a system with 100 Gflops of peak computing power. C-DAC recently established C-DAC's Tera-Scale Supercomputing Facility (CTSF) in Bangalore, which houses PARAM Padma, as shown in Figure 8. Many premier research organizations have been using these facilities, and encouraging performance is being reported for several industrial and scientific applications.



The primary objectives of CTSF are:

* To provide high performance computing facilities and services for the
scientific and research community and for the enterprise.

* To establish the technological capabilities in high performance computing that
have hitherto been confined only to developed countries.

* To solve some of the grand challenge problems which are the key to
economic growth, environmental understanding and research breakthroughs
in science and engineering.

Users can opt for one or more of the following options to access the CTSF resources remotely:

* Establishing a 56.6 Kbps dialup link over PSTN (Public Switched Telephone
Network).

* Establishing a dedicated 128 Kbps link over ISDN (Integrated Services Digital
Network).

* Establishing a 64 Kbps leased line terrestrial circuit between remote locations
and C-DAC.

* Providing a secure login via the Internet.

Conclusions

From the hardware and system software points of view, our experience in building the teraflops-class PARAM Padma allowed us to understand the scalability issues of the high-performance system area network PARAMNet-II and its associated HPCC suite of system software. The PARAMNet switch architecture enables low-cost, high-performance implementations because of its functional simplicity. The results of the HPL benchmark used in the competition for the Top500 list show an efficiency of 53.6% on 62 nodes. Further developments in these areas are in progress.

Many research and development organizations and academic institutions in India are actively involved in building small clusters using off-the-shelf hardware and software components. Development of the PARAM series of supercomputers has enabled tackling large scientific problems that need very large clusters. Many PARAM series supercomputers have been deployed in leading institutes in India, and a few outside, on various parallel computing collaborative projects. The recently established C-DAC Tera-Scale Supercomputing Facility (CTSF), which houses PARAM Padma, is open to scientists and researchers in India and abroad.

Online Shopping Guide




Online shopping has many advantages for all types of customers compared to other kinds of shopping. The most important are saving valuable time and money. It offers good service and fast, easy payment facilities. Before heading to your nearest shopping mall to buy a new product, consider online shopping instead.



Why is online shopping the simpler solution? Let me explain.
1) It gives you a huge catalogue of products running to thousands of pages, compared to a printed catalogue of two or three pages.
2) Within two to three hours you can browse thousands of products, whereas you could spend a whole day in a mall and see only 10-50.
3) It gives you complete information about products and services, more than a shopkeeper in a shopping mall can offer.
4) It puts a huge shopping mall on your desk, on your PC or laptop, so there is no need to go out and waste your valuable time.
5) It gives you fast, easy and secure payment facilities, with no need to stand in a queue to pay.
6) It gives you money-back guarantees and discounts.
7) It gives you all types of branded products that are not available in your city or even your country.
8) And finally, it saves your valuable time and money, which you can spend on things you enjoy, like a vacation.
All the above facilities are provided only by online shopping. So start purchasing products online and enjoy life.

Have a look at these sites:

www.ebay.in
www.futurebazaar.com
www.indiashopping.exclventures.com
www.shopping.rediff.com
www.shopping.indiatimes.com
www.indiaplaza.in
www.indiavarta.com

Step IN and Feel the Advantage..

Friday, July 30, 2010

360 Panorama




Shooting panoramic photos with a mobile phone can be difficult. Often it requires doing all the work in a software app after you get back from wherever you are, as well as making sure that the phone's camera does not change its white balance or exposure between shots.

Occipital, the creators of the popular RedLaser scanning app (which was sold to eBay last month), have a new iPhone app debuting on Friday called 360 Panorama, which is attempting to change that. For $2.99, users can simply move their phone from left to right to capture a photo panorama. The end result is a single, panoramic photo that requires zero post-processing.

Behind the scenes the app is actually using the iPhone's video camera, which means that users will need a 3GS or the newer iPhone 4 to use it. The app also takes advantage of the iPhone 4's gyroscope hardware to help judge how quickly you're rotating, so it can figure out what needs to be captured and where you've already been. As it records imagery, it stitches together an image based on your movement, which you can see and track to make any angle corrections. Some modern day point and shoot cameras like Sony's Cyber-shot DSC-W370 are able to do the same thing, though with a larger end result.

Size and distortions are ultimately the two things that limit this app from being as useful as proper photo stitching software. The images it spits out are quite small when compared with the still shots your camera takes. You can see this in the two sample photos.

The larger problem is the distortion, which Occipital co-founder Vikas Reddy told me is made worse in indoor situations. His team is working on ways to make it better in a future release, but in the meantime shooting outdoors provides for a much smoother and less jaggy experience. Being in the urban jungle of downtown San Francisco, I wasn't able to fully test how well it would work on something like rolling hills or a forest, but as you can see from the shots above it does a fine job until you hit perfectly straight lines where the software is forced to make a stitch by guesswork.

These issues aside, 360 Panorama is an incredibly neat, and genuinely useful app. It may have no business taking over the job of a good crisp, and low distortion still image, but if you want to quickly capture an incredible amount of detail of the world around you, it's tough to beat.

Cloud Computing




Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. However, the analogy to utility computing is not entirely correct, as discussed here. Cloud computing is a paradigm shift following the shift from mainframe to client–server in the early 1980s. Details are abstracted from the users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them.[1] Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, and it typically involves over-the-Internet provision of dynamically scalable and often virtualized resources.[2][3] It is a byproduct and consequence of the ease-of-access to remote computing sites provided by the Internet.[4] This frequently takes the form of web-based tools or applications that users can access and use through a web browser as if it were a program installed locally on their own computer.[5] NIST provides a somewhat more objective and specific definition here. The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network,[6] and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.[7] Typical cloud computing providers deliver common business applications online that are accessed from another Web service or software like a Web browser, while the software and data are stored on servers.

Most cloud computing infrastructures consist of services delivered through common centers and built on servers. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers, and typically include SLAs. The major cloud service providers include Microsoft,[9] Salesforce, Skytap, HP, IBM, Amazon and Google.

Features

  • Agility improves with users' ability to rapidly and inexpensively re-provision technological infrastructure resources.[29]
  • Cost is claimed to be greatly reduced and capital expenditure is converted to operational expenditure[30]. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).[31]
  • Device and location independence[32] enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.[31]
  • Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilization and efficiency improvements for systems that are often only 10–20% utilized.[25]
  • Reliability is improved if multiple redundant sites are used, which makes well designed cloud computing suitable for business continuity and disaster recovery.[33] Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.[34][35]
  • Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.[31] One of the most important new methods for overcoming performance bottlenecks for a large class of applications is data parallel programming on a distributed data grid.[36]
  • Security could improve due to centralization of data[37], increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels[38]. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford.[39] Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible. Furthermore, the complexity of security is greatly increased when data is distributed over a wider area and / or number of devices.
  • Maintenance: cloud computing applications are easier to maintain, since they do not have to be installed on each user's computer. They are easier to support and to improve, since changes reach the clients instantly.
  • Metering: cloud computing resource usage should be measurable and should be metered per client and application on a daily, weekly, monthly, and annual basis. This will enable clients to choose a cloud vendor on the basis of cost and reliability.