CS6712 Grid and Cloud Computing Laboratory

April 2, 2018 | Author: Suresh Babu Karunakaran | Category: Software Engineering, Computing, Technology, Areas Of Computer Science, Computer Architecture






NSCET-LAB MANUAL

Theni Melapettai Hindu Nadargal Uravinmurai
NADAR SARASWATHI COLLEGE OF ENGINEERING AND TECHNOLOGY
(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)
Vadapudupatti, Annanji (PO), Theni – 625 531.

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

SUBJECT CODE: CS6712 Grid and Cloud Computing Laboratory
PREPARED BY: Mr. S. C. Prabanand, AP/CSE

CS6712 GRID AND CLOUD COMPUTING LABORATORY

OBJECTIVES:
The student should be made to:
- Be exposed to toolkits for grid and cloud environments.
- Be familiar with developing web services/applications in a grid framework.
- Learn to run virtual machines of different configurations.
- Learn to use Hadoop.

LIST OF EXPERIMENTS:

GRID COMPUTING LAB
Use Globus Toolkit or equivalent and do the following:
1. Develop a new Web Service for Calculator.
2. Develop a new OGSA-compliant Web Service.
3. Using Apache Axis, develop a Grid Service.
4. Develop applications using Java or C/C++ Grid APIs.
5. Develop secured applications using basic security mechanisms available in the Globus Toolkit.
6. Develop a Grid portal where a user can submit a job and get the result. Implement it with and without the GRAM concept.

CLOUD COMPUTING LAB
Use Eucalyptus or OpenNebula or equivalent to set up the cloud and demonstrate:
1. Find the procedure to run virtual machines of different configurations. Check how many virtual machines can be utilized at a particular time.
2. Find the procedure to attach a virtual block to a virtual machine and check whether it holds the data even after the release of the virtual machine.
3. Install a C compiler in the virtual machine and execute a sample program.
4. Show virtual machine migration from one node to another based on a certain condition.
5. Find the procedure to install a storage controller and interact with it.
6. Find the procedure to set up a one-node Hadoop cluster.
7. Mount the one-node Hadoop cluster using FUSE.
8.
Write a program to use the APIs of Hadoop to interact with it.
9. Write a word count program to demonstrate the use of Map and Reduce tasks.

OUTCOMES:
At the end of the course, the student should be able to:
- Use the grid and cloud toolkits.
- Design and implement applications on the Grid.
- Design and implement applications on the Cloud.

LIST OF EQUIPMENT FOR A BATCH OF 30 STUDENTS:
SOFTWARE: Globus Toolkit or equivalent; Eucalyptus or OpenNebula or equivalent
HARDWARE: Standalone desktops - 30 Nos.

GRID COMPUTING LAB

EX No: 1  DEVELOP A NEW WEB SERVICE FOR CALCULATOR

Aim:
To develop a new Web Service for Calculator using the Globus Toolkit.

Procedure:
1. Create a new project.
2. Select Java Empty Web Application.
3. Give a name to your project and click the OK button.
4. Go to Solution Explorer and right-click your project.
5. Select Add New Item and select Web Service application.
6. Give it a name and click the OK button.

Program:

package gt3tutorial.core.first.impl;

import org.globus.ogsa.impl.ogsi.GridServiceImpl;
import gt3tutorial.core.first.Math.MathPortType;
import java.rmi.RemoteException;

public class MathImpl extends GridServiceImpl implements MathPortType {

    public MathImpl() {
        super("Simple Math Service");
    }

    public int add(int a, int b) throws RemoteException {
        return a + b;
    }

    public int subtract(int a, int b) throws RemoteException {
        return a - b;
    }

    public int multiply(int a, int b) throws RemoteException {
        return a * b;
    }

    // Cast before dividing: a plain a / b would truncate to an integer.
    public float divide(int a, int b) throws RemoteException {
        return (float) a / b;
    }
}

OUTPUT:

Result:
Thus the Web Service for Calculator was developed using the Globus Toolkit successfully.
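Since the grid service above can only be exercised after deployment into a Globus container, it helps to check the arithmetic locally first. The sketch below is a plain-Java copy of the same logic with no Globus dependencies (the class name is ours, not part of the toolkit); note the cast in divide(), which the service method needs as well to avoid integer truncation.

```java
// Plain-Java sketch of the MathImpl arithmetic, with no Globus
// dependencies, so the logic can be unit-checked before deployment.
// The cast in divide() avoids silent integer truncation.
public class CalculatorLogic {
    public int add(int a, int b)      { return a + b; }
    public int subtract(int a, int b) { return a - b; }
    public int multiply(int a, int b) { return a * b; }
    public float divide(int a, int b) { return (float) a / b; }
}
```

For example, divide(7, 2) yields 3.5f, whereas integer division would give 3.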
EX No: 2  OGSA-COMPLIANT WEB SERVICE

Aim:
To develop an OGSA-compliant Web Service using the Globus Toolkit.

Procedure:

The Global Grid Forum (GGF)'s Open Grid Services Architecture working group (OGSA-WG) has recently completed two major work products. The OGSA Version 1 document collates requirements for an Open Grid Services Architecture and identifies a large number of service interfaces that may be required to meet those requirements. The OGSA Use Cases document describes a set of use cases from a range of enterprise and scientific settings, intended as a source of requirements for OGSA services.

The completion of these two documents leads to the question: what is the path by which OGSA should now be further developed and defined? An answer to this question is important to a variety of people. Arguably the credibility of OGSA, GGF, and Grid as a whole depends in part on a coherent answer to this question. Many GGF participants have bought into the notion that OGSA can serve as an overarching architectural framework for different GGF activities; they now want to understand what this framework implies for their work. Developers and users want to know "what they can expect when" in terms of standards, so that they can make plans for product developments and technology acquisitions. These and other pressures encourage the view that we must move quickly to fill out the OGSA definition and produce a set of normative specifications that define in great detail what it means to be OGSA compliant.

However, before rushing into this task, we must also be aware of a number of other factors:
- The broad importance of Grid and the tight alignment of OGSA with Web services mean that further work on OGSA cannot proceed as a purely GGF activity, but must rather be viewed as one (hopefully important) input to a larger process aimed at defining service-oriented solutions to distributed computing problems.
- As in any standardization process, we need to be acutely sensitive to the dangers of premature standardization, i.e. standardization without adequate experience and/or buy-in from its eventual users. These issues are particularly important in the case of OGSA, due to the particularly large gap between our ambition and experience.
- While the OGSA design team has worked hard and considered a variety of use cases, the team remains relatively small. It would seem likely that there are important perspectives that have not yet been considered.
- The human resources available to work on OGSA activities are small: certainly far fewer than are needed to do justice to the full spectrum of issues described in OGSA Version 1.

These considerations motivate this document, which seeks to clarify the role of OGSA and the steps required to refine its definition by addressing the following three issues:
- With a view to identifying external constraints on OGSA, we review major relevant standardization activities external to GGF. We discuss both past activities that have produced specifications on which OGSA can build, and current and planned future activities that may contribute to, or constrain, OGSA's evolution.
- With a view to identifying factors that might help prioritize work on OGSA, we identify dependencies among different OGSA interfaces and the interfaces that appear needed within different deployment and application profiles.
- With a view to clarifying the process by which OGSA definition may proceed, we recommend a process by which technical specifications developed within or outside GGF can be identified as meeting OGSA requirements.

The following are a few rough notes that may be relevant.

1 Open Grid Services Architecture: Goals and Status
A few brief words on goals and status, referring to other documents for details of course. Overarching goals.

2 The Standards Landscape
The roles of GGF, W3C, OASIS, WS-I, DMTF, IETF, and their relevance to OGSA. Documents produced, and current and planned future activities that may contribute to, or constrain, OGSA's evolution. Products that may be expected from each of these groups in the coming 1-2 years, e.g. WSDM/CMM, policy models/languages (constraints and capabilities), Workflow (CAF/BPEL), Naming, Notification and Eventing, Reliable Messaging, common data model, provisioning and deployment (Solution Installation Schema, CDDLM), CML, WMX. Approaches that may be taken to facilitate coordination. There are many places where bilateral coordination has worked, but there is still a lot of potential for conflict.

3 OGSA Definition Process
The process by which we see OGSA being further refined. The steps by which a technical specification may become identified as "OGSA compliant" remain to be clearly defined. I'd suggest that a key requirement should be identification as a "recommendation" in the sense that there are two or more interoperable implementations. Figure 1 may be relevant here. A key point that Foster wants to see expressed here is that the "top down" and "bottom up" worlds are to be coordinated as follows:
- OGSA-WG is concerned with defining requirements and overall architecture: the lighthouse towards which others may steer.
- WGs within GGF or other bodies may (hopefully will!) be formed to develop specifications that speak to requirements identified by OGSA-WG. OGSA-WG is not in the business of endorsing such activities ahead of time.
Several low-hanging pieces of fruit were identified: Notification; code footprint vs. minimal functionality; and how easy it is to do things.

Materials from OGSA F2F 2004

Figure 2 is the result of a brainstorming session (and was put to the side on the next day).
[Figure: two-axis roadmap sketch — coarser granularity: Architecture / Roadmap (OGSA-WG: Ian, Hiro, Jay), v1…vN; finer granularity: design teams (EMS — Ravi; Data — DaveB; Security — Frank), normative specs, capability specs (EPS, CSG, …), and external standards (e.g. WS specs) that satisfy our requirements, or identify what we'd like to see happen; plus a detailed chart with stages of individual specs and when we expect they would be available. "We are here."]

Resolved:
- Roadmap document per design team, and a document that links them for OGSA. The OGSA document to also describe dependencies between the design teams' roadmaps.
- And a one-page PowerPoint slide that shows the overall map (and dates).
- Documents are internal to ogsa-wg; the one-page PowerPoint slide is public.
- (The draft roadmap shouldn't have to go into public comment.)

Roadmap discussion:
- Agreed that relations and dependencies with other groups need to be explicitly spelled out: what we will work on with them or defer to them; a list of what is required and in what order; identify dependencies and priorities we have received from the community.
- Create an ordering of the specs (essentially milestones, priorities, etc. — which are the low-hanging fruits). Proposal: the roadmap should be independent of the dependencies but should include the dependencies.
- Agreement that it is OK to continue working on higher levels based on the expectation of specs at a lower level that will define some needed functionality, as long as there is a description of what can be expected of the lower spec (in terms of requirements). Document decisions and what they imply and who is doing what.
- Also to help people looking from the outside see what OGSA is and what is happening.
- Intended audience: need to identify in more detail.
- At the moment we have two levels of detail; we need a coarser level of detail for the top-level roadmap. (Lowest level: specification and implementations.)
- The top-level roadmap should (also) be in two dimensions: time and level of detail.
- At the OGSA level, talk in terms of EMS/Data/Security (and maybe Naming).
- Action: Jay to talk with Ian and Hiro to form a core design team to work on dependencies at the architecture level (and roadmap).

WSDM and external coordination (Call 2):
- In the case of WSDM we have identified: Naming/identity; Events; Metadata (query to the data design team); EMS overlap (or EMS usage of WSDM) — e.g. as an initial 'application' area.
- Currently only WSDM MUWS/MOWS is the other similar document.
- At the moment all collaboration is at the grassroots level only. Trying to come up with ways to collaborate at a higher level, e.g. a joint work/document or some more explicit understanding. This is not just liaison work; someone should 'own' that piece of work. Dave also volunteered to get involved in this effort; other people are encouraged to do the same within their companies.
- Need to communicate our requirements to them. In particular we should express OGSA priorities and feedback.
- OGSA v1 (currently in public comment) will hopefully act as a milestone/catalyst to push this work forward. (Jem trying to get OGSA v1 comments from HP.)
- The OGSA roadmap could be used as one important input to this discussion.
- Dave talked about the importance of the inter-standards-bodies meeting that Hiro attended last week, focusing on WS-management (as a common area of interest), to discuss collaboration between standards bodies. (IP policy issues, e.g. when work is done in different standards bodies.)
- Comments to Ian.
- Postpone further discussion to the next call (next week).

Result:
Thus the OGSA-compliant Web Service using the Globus Toolkit was developed successfully.

EX No: 3  USING APACHE AXIS DEVELOP A GRID SERVICE

Aim:
To develop a Web Service using the Apache Axis web server.

Procedure:
1. Create a new project: start by creating a new Java Project called ProvisionDirService. Select File > New > Project... and select Java > Java Project from the selection wizard. Click Next and enter ProvisionDirService in the Project Name textbox. Accept the remaining project creation defaults by clicking Finish.
2. Make the project a Tomcat project: the first thing we need to do is to make this project a "Tomcat Project". Open the project properties page shown in Figure 8 (select Properties from the project's pop-up menu), select the Tomcat page, and check the "Is a Tomcat Project" checkbox. Doing so enables Tomcat to run from the .class files inside our project as soon as they are compiled by Eclipse (which happens every time they are saved). Hence, minor changes to the service logic will be reflected in the running service without having to regenerate or redeploy any GARs; Tomcat reloads any updates to the implementation.
3. Add the project source to the Tomcat source path: go to Window > Preferences and select the Tomcat > Source Path page. Select the checkbox next to our ProvisionDirService project. This step allows the live debugger to pull up the fresh source when debugging.
4. Create the GT4 library: to create a user library from the GT4 library directory, use the User Libraries... button, click New... in the User Libraries dialog (see Figure 11), and create a library called GT4 Library.
5. Finish the configuration.
OUTPUT:

Result:
Thus a Grid Service was developed using the Apache Axis web server successfully.

EX No: 4  DEVELOP GRID APIs USING C++

Aim:
To write a program for developing Grid APIs using C++.

Algorithm:
The Simple API for Grid Applications (SAGA) is a family of related standards specified by the Open Grid Forum to define an application programming interface (API) for common distributed computing functionality. The SAGA specification for distributed computing originally consisted of a single document, GFD.90, which was released in 2009. The API insulates application developers from middleware. It does not target middleware developers, but application developers with no background in grid computing; such developers typically wish to devote their time to their own goals and minimize the time spent coding infrastructure functionality. The SAGA API does not strive to replace Globus or similar grid computing middleware systems; rather, it seeks to hide the detail of any service infrastructures that may or may not be used to implement the functionality that the application developer needs. The specification of services, and the protocols to interact with them, is out of the scope of SAGA. The API aligns, however, with all middleware standards within the Open Grid Forum.

Implementations:
Since the SAGA interface definitions are not bound to any specific programming language, several implementations of the SAGA standards exist in different programming languages. Apart from the implementation language, they differ from each other in their completeness in terms of standard coverage, as well as in their support for distributed middleware. SAGA C++ was the first complete implementation of the SAGA Core specification, written in C++. Currently the C++ implementation is not under active development.

Job submission:
A typical task in a distributed application is to submit a job to a local or remote distributed resource manager. SAGA provides a high-level API called the job package for this. The following simple example shows how the SAGA job package API can be used to submit a Message Passing Interface (MPI) job to a remote Globus GRAM resource manager.

C++ Program:

#include <saga/saga.hpp>

int main (int argc, char** argv)
{
    namespace sa  = saga::attributes;
    namespace sja = saga::job::attributes;

    try {
        saga::job::description jd;

        jd.set_attribute (sja::description_executable, "/home/user/hello-mpi");
        jd.set_attribute (sja::description_output, "/home/user/hello.out");
        jd.set_attribute (sja::description_error, "/home/user/hello.err");

        // Declare this as an MPI-style job
        jd.set_attribute (sja::description_spmd_variation, "mpi");

        // Name of the queue we want to use
        jd.set_attribute (sja::description_queue, "checkpt");

        // Number of processors to request
        jd.set_attribute (sja::description_number_of_processes, "32");

        saga::job::service js("gram://my.globus.host/jobmanager-pbs");
        saga::job::job j = js.create_job(jd);

        j.run();
    }
    catch (saga::exception const & e) {
        std::cerr << "SAGA exception caught: " << e.what() << std::endl;
    }
}

Result:
Thus the program for developing Grid APIs using C++ was executed successfully.
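A SAGA job description is essentially a bag of string attributes that the middleware binding interprets at submission time. As a language-neutral illustration of that style — plain Java, not the SAGA API or any of its bindings — a description can be modeled as a small attribute map that a submission front end validates before use:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java illustration of the SAGA job-description attribute style.
// Attribute names mirror the C++ example above, but this class is NOT
// part of SAGA; it only shows the string-attribute pattern.
public class JobDescriptionSketch {
    private final Map<String, String> attrs = new HashMap<>();

    public void setAttribute(String key, String value) {
        attrs.put(key, value);
    }

    public String getAttribute(String key) {
        return attrs.get(key);
    }

    // A submission front end would reject a description with no executable.
    public boolean isSubmittable() {
        return attrs.containsKey("Executable");
    }
}
```

The point of the pattern is that everything, including numeric values such as the process count, travels as strings, which is what lets one description type serve many back-end resource managers.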
EX No: 5  DEVELOP SECURED APPLICATIONS USING BASIC SECURITY IN GLOBUS

Aim:
To develop secured applications using basic security in Globus.

Procedure:

Authenticating Users Programmatically
Servlet 3.0 specifies the following methods of the HttpServletRequest interface that enable you to authenticate users for a web application programmatically:
- authenticate: allows an application to instigate authentication of the request caller by the container from within an unconstrained request context. A login dialog box displays and collects the user's name and password for authentication purposes.
- login: allows an application to collect username and password information as an alternative to specifying form-based authentication in an application deployment descriptor.
- logout: allows an application to reset the caller identity of a request.

The following example code shows how to use the login and logout methods:

package test;

import java.io.IOException;
import java.io.PrintWriter;
import java.math.BigDecimal;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(name="TutorialServlet", urlPatterns={"/TutorialServlet"})
public class TutorialServlet extends HttpServlet {
    @EJB
    private ConverterBean converterBean;

    /**
     * Processes requests for both HTTP <code>GET</code>
     * and <code>POST</code> methods.
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            request.login("TutorialUser", "TutorialUser");
            BigDecimal result =
                converterBean.dollarToYen(new BigDecimal("1.0"));
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet TutorialServlet</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h1>Servlet TutorialServlet result of dollarToYen= "
                + result + "</h1>");
            out.println("</body>");
            out.println("</html>");
        } catch (Exception e) {
            throw new ServletException(e);
        } finally {
            request.logout();
            out.close();
        }
    }
}

This code sample shows how to use the authenticate method:

package com.sam.test;

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class TestServlet extends HttpServlet {
    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            request.authenticate(response);
            out.println("Authenticate Successful");
        } finally {
            out.close();
        }
    }
}

Writing GSI enabled Web Services

Installing the software

Install Tomcat
- Install Tomcat according to the instructions and check that it works.

Deploy Axis on Tomcat
- Install Axis according to the instructions and check that it works.
- Check that log4j-core.jar and xerces.jar (or other XML parser) are in Tomcat's common/lib directory.
- Note that a bug in Tomcat means that any jars containing java.* or javax.* classes will not be executed if they are in the webapps/ tree. Instead, copy the jars to Tomcat's common/lib directory. In Axis alpha 3 this applies to axis.jar; in Axis beta 1 this applies to jaxrpc.jar and wsdl4j.jar.

Install libraries to provide GSI support for Tomcat
- Copy cog.jar, cryptix.jar, iaik_javax_crypto.jar, iaik_jce_full.jar and iaik_ssl.jar to Tomcat's common/lib directory.
- Copy gsicatalina.jar to Tomcat's server/lib directory.

Deploy GSI support in Tomcat
- Edit Tomcat's conf/server.xml:
- Add a GSI Connector in the <service> section:

<!-- Define a GSI HTTP/1.1 Connector on port 8443
     Supported parameters include:
     proxy      // proxy file for server to use
     or
     cert       // server certificate file in PEM format
     key        // server key file in PEM format

     cacertdir  // directory location containing trusted CA certs
     gridMap    // grid map file used for authorization of users
     debug      // "0" is off and "1" and greater for more info
-->
<Connector className="org.apache.catalina.connector.http.HttpConnector"
    port="8443" minProcessors="5" maxProcessors="75"
    enableLookups="true" authenticate="true"
    acceptCount="10" debug="1" scheme="httpg" secure="true">
  <Factory className="org.apache.catalina.net.GSIServerSocketFactory"
      cert="/etc/grid-security/hostcert.pem"
      key="/etc/grid-security/hostkey.pem"
      cacertdir="/etc/grid-security/certificates"
      gridmap="/etc/grid-security/gridmap-file"
      debug="1"/>
</Connector>

If you are testing under a user account, you can use user proxies or certificates instead of host certificates for testing purposes, e.g.:

<Connector className="org.apache.catalina.connector.http.HttpConnector"
    port="8443" minProcessors="5" maxProcessors="75"
    enableLookups="true" authenticate="true"
    acceptCount="10" debug="1" scheme="httpg" secure="true">
  <Factory className="org.apache.catalina.net.GSIServerSocketFactory"
      proxy="/tmp/x509u_up_neilc" debug="1"/>
</Connector>

If you do test using user proxies, make sure the proxy has not expired, and make sure that the proxy or certificates and keys are readable by Tomcat.

- Add a GSI Valve in the <engine> section:

<Valve className="org.globus.tomcat.catalina.valves.CertificatesValve"
    debug="1" />

Install libraries to provide GSI support for Axis
- Copy gsiaxis.jar to the WEB-INF/lib directory of your Axis installation under Tomcat.

Set your CLASSPATH correctly
- You should ensure that the following jars from the axis/lib directory are in your classpath: axis.jar, clutil.jar, commons-logging.jar, jaxrpc.jar, log4j-core.jar, tt-bytecode.jar, wsdl4j.jar.
- You should also have these jars in your classpath: gsiaxis.jar, cog.jar, xerces.jar (or other XML parser).
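Classpath mistakes are the most common failure mode in a setup like this. A small probe such as the one below (our own utility, not part of Axis or Globus) loads one representative class per jar with Class.forName and reports which jars appear to be missing; the jar-to-class pairings in main are illustrative and should be adjusted to your installation.

```java
// Classpath sanity check: try to load a representative class from each
// required jar and report which jars appear to be missing. This helper
// is our own; it is not part of the Axis or Globus distributions.
public class ClasspathCheck {

    // Returns true when a representative class from the jar can be loaded.
    static boolean check(String jar, String className) {
        try {
            Class.forName(className);
            System.out.println(jar + ": OK");
            return true;
        } catch (ClassNotFoundException e) {
            System.out.println(jar + ": MISSING (" + className + " not found)");
            return false;
        }
    }

    public static void main(String[] args) {
        check("axis.jar", "org.apache.axis.client.Call");
        check("jaxrpc.jar", "javax.xml.rpc.Call");
        check("xerces.jar", "org.apache.xerces.parsers.SAXParser");
    }
}
```

Running it before starting Tomcat makes "NoClassDefFoundError at deploy time" problems visible up front.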
Start the GSI enabled Tomcat/Axis server
- Start up Tomcat as normal.
- Check the logs in Tomcat's logs/ directory to ensure the server started correctly. In particular check that:
  - apache_log.YYYY-MM-DD.txt does not contain any GSI related error messages
  - catalina.out contains messages saying "Welcome to the IAIK ... Library"
  - catalina_log.YYYY-MM-DD.txt contains messages saying "HttpConnector[8443] Starting background thread" and "HttpProcessor[8443][N] Starting background thread"
  - localhost_log.YYYY-MM-DD.txt contains a message saying "WebappLoader[/axis]: Deploy JAR /WEB-INF/lib/gsiaxis.jar"

Writing a GSI enabled Web Service

Implementing the service
The extensions made to Tomcat allow us to receive credentials through a transport-level security mechanism. Tomcat exposes these credentials, and Axis makes them available as part of the MessageContext.

Alpha 3 version
Let's assume we already have a web service called MyService with a single method, myMethod. When a SOAP message request comes in over the GSI httpg transport, the Axis RPC despatcher will look for the same method, but with an additional parameter: the MessageContext. So we can write a new myMethod which takes an additional MessageContext argument. This can be illustrated in the following example:

package org.globus.example;

import org.apache.axis.MessageContext;
import org.globus.axis.util.Util;

public class MyService {

    // The "normal" method
    public String myMethod(String arg) {
        System.out.println("MyService: http request\n");
        System.out.println("MyService: you sent " + arg);
        return "Hello Web Services World!";
    }

    // Add a MessageContext argument to the normal method
    public String myMethod(MessageContext ctx, String arg) {
        System.out.println("MyService: httpg request\n");
        System.out.println("MyService: you sent " + arg);
        System.out.println("GOT PROXY: " + Util.getCredentials(ctx));
        return "Hello Web Services World!";
    }
}
Beta 1 version
In the Beta 1 version you don't even need to write a different method. Instead, the MessageContext is put on thread local store, and can be retrieved by calling MessageContext.getCurrentContext():

package org.globus.example;

import org.apache.axis.MessageContext;
import org.globus.axis.util.Util;

public class MyService {

    // Beta 1 version
    public String myMethod(String arg) {
        System.out.println("MyService: httpg request\n");
        System.out.println("MyService: you sent " + arg);

        // Retrieve the context from thread local store
        MessageContext ctx = MessageContext.getCurrentContext();
        System.out.println("GOT PROXY: " + Util.getCredentials(ctx));

        return "Hello Web Services World!";
    }
}

Part of the code provided by ANL in gsiaxis.jar is a utility package which includes the getCredentials() method. This allows the service to extract the proxy credentials from the MessageContext.

Deploying the service
Before the service can be used it must be made available. This is done by deploying the service, which can be done in a number of ways:
1. Use the Axis AdminClient to deploy the MyService classes.
2. Add the following entry to the server-config.wsdd file in the WEB-INF directory of Axis on Tomcat:

<service name="MyService" provider="java:RPC">
    <parameter name="className" value="org.globus.example.MyService"/>
    <parameter name="methodName" value="*"/>
</service>

Writing a GSI enabled Web Service client
As in the previous example, this is very similar to writing a normal web services client. There are some additions required to use the new GSI over SSL transport:
- Deploy a httpg transport chain.
- Use the Java CoG kit to load a Globus proxy.
- Use setProperty() to set GSI specifics in the Axis "Property Bag":
  - globus credentials (the proxy certificate)
  - authorisation type
  - GSI mode (SSL, no delegation, full delegation, limited delegation)
- Continue with the normal Axis SOAP service invocation:
  - Set the target address for the service
  - Provide the name of the method to be invoked
  - Pass on any parameters required
  - Set the type of the returned value
  - Invoke the service

Here's an example which can be used to call the service you wrote in the last section:

package org.globus.example;

import javax.xml.namespace.QName;
import javax.xml.rpc.ParameterMode;
import org.apache.axis.AxisFault;
import org.apache.axis.SimpleTargetedChain;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;
import org.apache.axis.configuration.SimpleProvider;
import org.apache.axis.encoding.XMLType;
import org.apache.axis.transport.http.HTTPSender;
import org.apache.axis.utils.Options;
import org.globus.axis.transport.GSIHTTPSender;
import org.globus.axis.transport.GSIHTTPTransport;
import org.globus.axis.util.Util;
import org.globus.security.GlobusProxy;
import org.globus.security.auth.SelfAuthorization;

public class Client {
    public static void main(String [] args) {
        Util.registerTransport();
        try {
            Options options = new Options(args);
            String endpointURL = options.getURL();
            args = options.getRemainingArgs();

            // Parse the arguments for text to send
            String textToSend;
            if ((args == null) || (args.length < 1)) {
                textToSend = "";
            } else {
                textToSend = args[0];
            }

            // Set up transport handler chains and deploy
            SimpleProvider provider = new SimpleProvider();
            SimpleTargetedChain c = null;
            c = new SimpleTargetedChain(new GSIHTTPSender());
            provider.deployTransport("httpg", c);
            c = new SimpleTargetedChain(new HTTPSender());
            provider.deployTransport("http", c);

            // Load globus proxy
            GlobusProxy proxy = GlobusProxy.getDefaultUserProxy();

            // Create a new service call
            Service service = new Service(provider);
            Call call = (Call) service.createCall();

            // Set globus credentials
            call.setProperty(GSIHTTPTransport.GSI_CREDENTIALS, proxy);

            // Set authorization type
            call.setProperty(GSIHTTPTransport.GSI_AUTHORIZATION,
                new SelfAuthorization(proxy));

            // Set gsi mode
            call.setProperty(GSIHTTPTransport.GSI_MODE,
                GSIHTTPTransport.GSI_MODE_LIMITED_DELEG);

            // Set the address of the service (from cmd line arguments)
            call.setTargetEndpointAddress(new java.net.URL(endpointURL));

            // Set the name of the method we're invoking
            call.setOperationName(new QName("MyService", "myMethod"));

            // Setup a target parameter
I chose Axis as the SOAP services provider because it too is open source. Descriptions of the GSI extensions to Tomcat and Axis 1. DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING .printStackTrace().PARAM_MODE_IN).println("MyService returned: " + ret). I chose Tomcat as the servlet container/HTTP server because it is an open source project and proves to be extremely reliable and easy to use. Build a server-side SOAP service using Tomcat and Axis 2.out you should see the messages from the Client received by the service. } else e.globus. and it would contain additional logic to control which JAR was delivered to whom. NSCET-LAB MANUAL call. Build a trivial compute task designed to exercise the client ClassLoader Build the SOAP service The SOAP service I build in this article is the closest thing to a management layer that this framework will have.addParameter( "arg1". // Print out the returned value System. If you examine logs/catalina. // Invoke the method. The first step in providing the SOAP service is to set up the SOAP infrastructure. Create connection stubs to support client-side use of the SOAP service 3.dump(). as well as the proxy credentials.invoke( new Object[] { textToSend } ). } } } You can invoke this client by running: java org.Client -l httpg://127. } catch (Exception e) { if ( e instanceof AxisFault ) { ((AxisFault)e).out. passing in the value of "arg1" String ret = (String) call. This service fetches a known jar file. NSCET-LAB MANUAL supports an easy-to-use drag-and-drop service installer.io.available()].util. import java. fi. loads the file into a byte array. fi. The following code is the entire file GridConnection. try { FileInputStream fi = new FileInputStream("/Users/tkarre/MySquare/build/MySquare.close() . jarBytes = new byte[fi.java: //// GridConnection. public class GridConnection { public byte[] getJarBytes () { byte[] jarBytes = null .0.* . } catch(Exception e) {} return jarBytes .read(jarBytes).6 and Axis 1.jar"). 
This service fetches a known jar file, loads the file into a byte array, and returns the byte array to the caller. The following code is the entire file GridConnection.java:

//
// GridConnection.java
//
import java.io.*;
import java.util.*;

public class GridConnection {

    public byte[] getJarBytes() {
        byte[] jarBytes = null;
        try {
            FileInputStream fi = new FileInputStream("/Users/tkarre/MySquare/build/MySquare.jar");
            jarBytes = new byte[fi.available()];
            fi.read(jarBytes);
            fi.close();
        } catch (Exception e) {}
        return jarBytes;
    }
}

Result: Thus the above application using basic security in Globus was executed successfully.

EX No: 6 Develop a Grid portal, where the user can submit a job and get the result. Implement it with and without the GRAM concept.

AIM: To develop a Grid portal, with and without the GRAM concept.

Algorithm: The Grid Portal Development Kit (GPDK) leverages existing Globus/Grid middleware infrastructure as well as commodity web technology, including Java Server Pages and servlets. This section presents the design and architecture of GPDK, as well as a discussion of the portal building capabilities of GPDK, which allow application developers to build customized portals more effectively by reusing the common core services provided by GPDK.

The Grid Portal Development Kit

The Grid Portal Development Kit is based on the standard n-tier architecture adopted by most web application servers, as shown in Figure 1. Tiers represent physical and administrative boundaries between the end user and the web application server. The client tier is represented as tier 1 and consists of the end-user's workstation running a web browser. The only requirements placed upon the client tier are a secure (SSL-capable) web browser that supports DHTML/Javascript, for improved interactivity, and cookies, to allow session data to be transferred between the client and the web application server.

Job Submission

Both interactive and batch queue job submissions are enabled, using either the GSI enhanced SSH client [] or the Globus GRAM protocol to submit jobs to Globus gatekeepers deployed on Grid resources. The major GPDK components used to submit jobs are the JobBean, the JobSubmissionBean and the JobInfoBean. The JobBean provides a description of the job to be submitted. It includes methods for setting and returning values for the executable, additional arguments passed to the executable, the number of processors for parallel jobs, the batch queue if submitting in batch mode, and more. The JobSubmissionBean is actually an abstract class that is subclassed by the GramSubmissionBean, in the case of submitting a job to a Globus gatekeeper, or a GSISSHSubmissionBean, if using the GSI enhanced SSH client. The GramSubmissionBean capabilities are provided once again by the Java CoG library. Once a job has been successfully submitted, a JobInfoBean is created which contains a time stamp of when the job was submitted and other useful information about the job, including a GRAM URL that can be used to query the status of the job.

File Transfer

Data access capabilities are provided by the GridFTP [] API, implemented as part of the CoG toolkit and encapsulated into core GPDK service beans. Capabilities include file transfer, including third-party file transfer between GSI enabled FTP servers, as well as file browsing. The FileTransferBean provides a generic file transfer API that is extended by the GSIFTPTransferBean and the GSISCPTransferBean, an encapsulation of file transfer via the GSI enhanced scp command tool. The GSIFTPServiceBean provides a session scoped bean that manages multiple FTP connections to GSI enabled FTP servers. The GSIFTPServiceBean allows users to browse multiple GSI FTP servers simultaneously, and a separate thread monitors server timeouts. The GSIFTPViewBean is an example view bean used by a JSP to display the results of browsing a remote GSI FTP server.

Information Services

The Grid Forum Information Services working group has proposed the Grid Information Services (GIS) architecture for deploying information services on the Grid, and has supported the Lightweight Directory Access Protocol (LDAP) as the communication protocol used to query information services. Information services on the Grid are useful for obtaining both static and dynamic information on software and hardware resources. The Globus toolkit provides a Metacomputing Directory Service (MDS), which is an implementation of a Grid Information Service using OpenLDAP, an open source LDAP server. Although the Java CoG toolkit provides support for LDAP using the Java Naming and Directory Interface (JNDI), GPDK uses the open source Netscape/Mozilla Directory SDK [], as it proved easier to use in practice and also provides support for developing a connection pool that maintains multiple connections to several Grid Information service providers, thus eliminating the need for clients to reconnect during each query. However, this model will need to be re-evaluated with the widespread deployment of the MDS-2 architecture, which includes GSI enhancements making it necessary for clients to reauthenticate to the MDS for each query. Currently GPDK supports querying the MDS for hardware information such as CPU type, number of processors and other details, as well as CPU load and queue information that can be used by the user to make more effective job scheduling decisions. GPDK provides an MDSQueryBean and an MDSResultsBean for querying and formatting results obtained from the MDS.

OUTPUT:

Result: Thus the above Grid portal application is executed successfully.
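The JNDI route mentioned above can be sketched in a few lines. The following is a generic JNDI/LDAP sketch, not GPDK code: the host name, port and search base are placeholders (choose values for your own MDS deployment), and nothing actually connects to a server until an InitialDirContext is constructed from the environment.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class MdsQuerySketch {

    // Build the JNDI environment for an LDAP-based information service.
    // No network connection is made here; that happens only when
    // new InitialDirContext(env) is called.
    static Hashtable<String, String> buildLdapEnv(String ldapUrl) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, ldapUrl);
        return env;
    }

    public static void main(String[] args) {
        // Hypothetical MDS endpoint; replace with a real host and port.
        Hashtable<String, String> env = buildLdapEnv("ldap://mds.example.org:2135");
        System.out.println("Would connect to: " + env.get(Context.PROVIDER_URL));
        // To actually query hardware information, something like:
        //   DirContext ctx = new InitialDirContext(env);
        //   NamingEnumeration<?> results =
        //       ctx.search("<search base for your VO>", "(objectclass=*)", null);
    }
}
```

A connection-pooling layer, as GPDK builds on the Netscape/Mozilla Directory SDK, would keep such contexts open between queries instead of reconnecting each time.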
EX. NO.: 7 INSTALLATION OF VIRTUAL MACHINE
Date:
Aim: To find the procedure to run virtual machines of different configurations, and to check how many virtual machines can be utilized at a particular time.

KVM: In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, operating systems, storage devices, and computer network resources. Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor.

Steps for KVM Installation:

1. To run KVM, you need a processor that supports hardware virtualization. So check that your CPU supports hardware virtualization:

   egrep -c '(vmx|svm)' /proc/cpuinfo

   If 0 is printed, the CPU doesn't support hardware virtualization; if 1 (or more) is printed, the CPU supports hardware virtualization.

2. To see if your processor is 64-bit:

   egrep -c ' lm ' /proc/cpuinfo

   If 0 is printed, it means that your CPU is not 64-bit; if 1 (or more) is printed, it is 64-bit.

3. Verify that the KVM kernel modules and the KVM device are present:

   $ ls /lib/modules/3.16.0-3-generic/kernel/arch/x86/kvm
   kvm-amd.ko kvm-intel.ko kvm.ko
   $ ls /dev/kvm
   /dev/kvm

4. Install the necessary packages using the following command:

   $ sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager qemu-system

5. Creating VMs:

   virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 -c ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole --os-type linux --os-variant ubuntuHardy

Output:

1. New virtual machine is created using KVM.

Conclusion: Thus virtual machines of different configurations were created successfully.

EX. NO.: 8 INSTALLATION OF C COMPILER
Date:
Aim: To find the procedure to install a C compiler in the virtual machine and execute a C program.

Steps:

1. To install the C compiler in the guest OS, install the following package:

   $ sudo apt-get install gcc

2. Write a sample program using the gedit/vim editor, for example a file sample_c_program.c containing:

   #include <stdio.h>

   int main(void)
   {
       printf("Hello from the virtual machine\n");
       return 0;
   }

3. Compile the C program using the compiler installed:

   gcc sample_c_program.c -o output

4. Run the object file and get the output:

   ./output

Conclusion: Thus the C compiler was installed successfully and a sample C program was executed.

EX. NO.: 9 INSTALLATION OF STORAGE CONTROLLER
Date:
Aim: To find the procedure to install a storage controller and interact with it.

KVM: In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, operating systems, storage devices, and computer network resources. Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor.

Steps for KVM Installation:

1. To run KVM, you need a processor that supports hardware virtualization. Check that your CPU supports it:

   egrep -c '(vmx|svm)' /proc/cpuinfo

   (0 printed — no hardware virtualization support; 1 or more — supported.)

2. To see if your processor is 64-bit:

   egrep -c ' lm ' /proc/cpuinfo

   (0 printed — not 64-bit; 1 or more — 64-bit.)

3. Verify that the KVM kernel modules and the KVM device are present:

   $ ls /lib/modules/3.16.0-3-generic/kernel/arch/x86/kvm
   kvm-amd.ko kvm-intel.ko kvm.ko
   $ ls /dev/kvm
   /dev/kvm

4. Install the necessary packages using the following command:

   $ sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager qemu-system

5. Creating VMs:

   virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 -c ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole --os-type linux --os-variant ubuntuHardy

Output:

2. New virtual machine is created using KVM.

Conclusion: Thus the storage controller was installed successfully in the virtual machine.
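The two egrep probes in the steps above simply count the lines of /proc/cpuinfo that contain a given flag. The same check can be mirrored programmatically; the sketch below is illustrative only (the file path and flag names are taken from the commands above, and on a non-Linux host the file simply won't exist):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Pattern;

public class VirtCheck {

    // Count cpuinfo lines containing the hardware-virtualization flags,
    // mirroring: egrep -c '(vmx|svm)' /proc/cpuinfo
    static long countVirtFlags(String cpuinfo) {
        Pattern p = Pattern.compile("vmx|svm");
        return cpuinfo.lines().filter(l -> p.matcher(l).find()).count();
    }

    public static void main(String[] args) throws IOException {
        Path cpuinfo = Path.of("/proc/cpuinfo");
        if (!Files.exists(cpuinfo)) {
            System.out.println("/proc/cpuinfo not found (not a Linux host)");
            return;
        }
        long n = countVirtFlags(Files.readString(cpuinfo));
        System.out.println(n == 0
                ? "CPU doesn't support hardware virtualization"
                : "CPU supports hardware virtualization");
    }
}
```

The `lm` (64-bit) probe works the same way with the pattern `" lm "`.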
EX. NO.: 10 VIRTUAL MACHINE MIGRATION
Date:
Aim: To show the virtual machine migration, based on a certain condition, from one node to the other.

Steps to Migrate the Virtual Machine:

1. Open virt-manager.

2. Connect to the target host physical machine by clicking on the File menu, then clicking Add Connection.

3. Add connection. The Add Connection window appears. Enter the following details:

   Hypervisor: Select QEMU/KVM.
   Method: Select the connection method. An SSH connection is used in this example.
   Username: Enter the username for the remote host physical machine.
   Hostname: Enter the hostname/IP address for the remote host physical machine.

   Click the Connect button. An SSH connection is used in this example, so the specified user's password must be entered in the next step.

4. Migrate guest virtual machines. Open the list of guests inside the source host physical machine (click the small triangle on the left of the host name), right click on the guest that is to be migrated (guest1-rhel6-64 in this example) and click Migrate. In the New Host field, use the drop-down list to select the host physical machine you wish to migrate the guest virtual machine to and click Migrate. A progress window will appear.

virt-manager now displays the newly migrated guest virtual machine running in the destination host. The guest virtual machine that was running in the source host physical machine is now listed in the Shutoff state.

Conclusion: Thus the virtual machine is migrated from one node to another node successfully.

EX. NO.: 11 VIRTUAL BLOCK ATTACHMENT
Date:
Aim: To find the procedure to attach a virtual block to the virtual machine and check whether it holds the data even after the release of the virtual machine.

Steps:

- Make sure that you have shut down your virtual machine.
- Select your VM and then click Edit settings.
- Select the Hardware tab and then click Add.
- Select Hard Disk from the list of device types and then click Next.
- Choose Create a new virtual disk.
- Specify the disk size.
- Choose Thick Provision Lazy Zeroed.
- Choose Specify a datastore or datastore cluster: and then click Browse.
- Select your datastore from the provided list and then click OK.
- Click Next to accept the default advanced options. (By default, the new disk will be included in full VM snapshots. To keep them consistent, we recommend that you leave the Independent option unselected.)
- Click Finish to proceed with adding the disk. This may take some time, depending on how much storage you're adding.
- Click OK once the new hard disk has been added.

Conclusion: Thus the new virtual block is successfully added to the existing virtual machine.

EX. NO.: 12 HADOOP SETUP AND INSTALLATION
Date:
Aim: To find the procedure to set up a one node Hadoop cluster.

HADOOP: Apache Hadoop is an open-source software framework for storage and large-scale processing of data-sets on clusters of commodity hardware. Hadoop is an Apache top-level project being built and used by a global community of contributors and users. It is licensed under the Apache License 2.0. The Apache Hadoop framework is composed of the following modules:

- Hadoop Common – contains libraries and utilities needed by other Hadoop modules.
- Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
- Hadoop YARN – a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users' applications.
- Hadoop MapReduce – a programming model for large scale data processing.

Installation Steps:

- Install Java. Check the Java version in the system:

  $ java -version

- Open the /etc/profile file and add the following lines (as per your JDK version) to set an environment for Java:

  $ sudo vi /etc/profile
  #--insert JAVA_HOME
  JAVA_HOME=/opt/jdk1.8.0_05
  #--in PATH variable just append at the end of the line
  PATH=$PATH:$JAVA_HOME/bin
  #--Append JAVA_HOME at end of the export statement
  export PATH JAVA_HOME

  $ source /etc/profile

- Install SSH using the command:

  $ sudo apt-get install openssh-server openssh-client

- Generate an SSH key for the user, then enable password-less SSH access:

  $ ssh localhost
  $ ssh-keygen
  $ exit

- Hadoop installation: Download the tar.gz file of the latest version of Hadoop (hadoop-2.7.x) from the official site. Extract (untar) the downloaded file using the commands:

  $ sudo tar zxvf hadoop-2.7.0.tar.gz
  $ cd hadoop-2.7.0/

- Update the Hadoop environment variable in /etc/profile:

  $ sudo vi /etc/profile
  #--insert HADOOP_PREFIX
  HADOOP_PREFIX=/opt/hadoop-2.7.0
  #--in PATH variable just append at the end of the line
  PATH=$PATH:$HADOOP_PREFIX/bin
  #--Append HADOOP_PREFIX at end of the export statement
  export PATH JAVA_HOME HADOOP_PREFIX

  Source the /etc/profile:

  $ source /etc/profile

- Verify the Hadoop installation:

  $ cd $HADOOP_PREFIX
  $ bin/hadoop version

- Update the Java and Hadoop paths in the Hadoop environment files under $HADOOP_PREFIX/etc/hadoop:

  $ vi core-site.xml

  Paste the following between the <configuration> tags in core-site.xml:

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  $ vi hdfs-site.xml

  Paste the following between the <configuration> tags in hdfs-site.xml:

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  $ cp mapred-site.xml.template mapred-site.xml
  $ vi mapred-site.xml

  Paste the following between the <configuration> tags in mapred-site.xml:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  $ vi yarn-site.xml

  Paste the following between the <configuration> tags in yarn-site.xml:

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

- Format the HDFS file-system via the NameNode:

  $ bin/hadoop namenode -format

- Start the NameNode daemon and DataNode daemon: (port 50070)

  $ sbin/start-dfs.sh

- Start the ResourceManager daemon and NodeManager daemon: (port 8088)

  $ sbin/start-yarn.sh

- To stop the running processes:

  $ sbin/stop-dfs.sh
  $ sbin/stop-yarn.sh

Output:

Hadoop installation:
Create the HDFS directories:

Conclusion: Thus the one node Hadoop cluster is installed successfully.

EX. NO.: 13 HADOOP CLUSTER USING FUSE
Date:
Aim: To mount the one node Hadoop cluster using FUSE.

Steps:

Download the cdh3 repository from the internet:

$ wget http://archive.cloudera.com/one-click-install/maverick/cdh3-repository_1.0_all.deb

Add the cdh3 repository to the default system repository:

$ sudo dpkg -i cdh3-repository_1.0_all.deb

Update the package information using the following command:

$ sudo apt-get update

Install the hadoop-fuse package:

$ sudo apt-get install hadoop-0.20-fuse

Once fuse-dfs is installed, go ahead and mount HDFS using FUSE as follows:

$ sudo hadoop-fuse-dfs dfs://<name_node_hostname>:<namenode_port> <mount_point>

Conclusion: Thus the one node Hadoop cluster is mounted using FUSE successfully.

EX. NO.: 14 MAP AND REDUCE – WORD COUNT
Date:
Aim: To write a word count program to demonstrate the use of Map and Reduce tasks.

Mapreduce: MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster. A MapReduce program is composed of a Map() procedure that performs filtering and sorting and a Reduce() method that performs a summary operation.

"Map" step: Each worker node applies the "map()" function to the local data, and writes the output to a temporary storage. A master node ensures that only one copy of redundant input data is processed.

"Shuffle" step: Worker nodes redistribute data based on the output keys (produced by the "map()" function), such that all data belonging to one key is located on the same worker node.

"Reduce" step: Worker nodes now process each group of output data, per key, in parallel.

Steps:

Source Code:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

1. Set environmental variables:

   export JAVA_HOME=/usr/java/default
   export PATH=${JAVA_HOME}/bin:${PATH}
   export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar

2. Compile the source file and package it into a jar file:

   $ bin/hadoop com.sun.tools.javac.Main WordCount.java
   $ jar cf wc.jar WordCount*.class

3. Run the application:

   $ bin/hadoop jar wc.jar WordCount /user/joe/wordcount/input /user/joe/wordcount/output

Output:

$ bin/hadoop fs -cat /user/joe/wordcount/output/part-r-00000
Bye 1
Goodbye 1
Hadoop 2
Hello 2
World 2

Conclusion: Thus the word count program to demonstrate the Map and Reduce task is done successfully.
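Before running the job on a cluster, it can help to see the Map/Shuffle/Reduce flow of the word count in miniature. The sketch below imitates the three steps with plain Java collections; it illustrates the model only and does not use the Hadoop API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MiniWordCount {

    // "Map": emit (word, 1) per token; "Shuffle": group the 1s by word;
    // "Reduce": sum each group.
    static Map<String, Integer> wordCount(List<String> lines) {
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (String line : lines) {                          // Map + Shuffle
            for (String word : line.trim().split("\\s+")) {
                if (!word.isEmpty()) {
                    groups.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
                }
            }
        }
        Map<String, Integer> counts = new TreeMap<>();       // Reduce
        groups.forEach((word, ones) ->
                counts.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return counts;
    }

    public static void main(String[] args) {
        // Same word frequencies as the sample job output above.
        System.out.println(wordCount(List.of(
                "Hello World Bye World",
                "Hello Hadoop Goodbye Hadoop")));
        // → {Bye=1, Goodbye=1, Hadoop=2, Hello=2, World=2}
    }
}
```

In the real job, the grouping is done by the framework between the mapper and the reducer, and each group may be reduced on a different worker node.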
NO.1.: 15 API’S OF HADOOP Date: Aim: To write a program to use the API's of Hadoop to interact with it.sh  Start ResourceManager daemon and NodeManager daemon: (port 8088) $ sbin/start-yarn. NSCET-LAB MANUAL Conclusion: Thus the program to use the API of Hadoop is implemented successfully. DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING .