Thursday, May 17, 2012

Contributions of the week (week 15)

During this week I added to the wiki some information about Subversion that I got from Saul's blog. He told me he hadn't added it to the wiki yet, and since it was in Spanish I helped him with the English translation. I added this information because I think it is very useful.



Nominations:
Saul Gausin
Cecy Urbina
Carmen Suarez


Thursday, May 10, 2012

Contributions of the week (week 14)

This week I included in the wiki some information about LS-DYNA, a program used by NASA that can simulate complex real-world problems. It has many applications and can be used in the automobile, aerospace, construction, military, manufacturing and bioengineering industries. I think it is very useful.





Nominations:
Roberto Martinez
Carmen Suarez
Juan Carlos Espinosa

Friday, May 4, 2012

Monday, April 30, 2012

Contribution of the week (week 12)

During this week I added to the wiki information about parallel programming on the Xbox 360.

This kind of programming can be used for cardiac research, and I also added some information about how that can be done.

The Xbox 360 system has a single chip (with 165 million transistors) for its CPU. This chip is in fact a three-way symmetric multiprocessor design.

With the help of this chip, one can model how electrical excitations in the heart move around damaged cardiac cells. This could be useful to investigate or even predict cardiac arrhythmias (abnormal electrical activity in the heart which can lead to a heart attack).
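The post itself contains no code, so here is a minimal sketch of the general idea (my own toy example, written for Python 3, not the actual Xbox 360 implementation): a simplified excitable-media model of cardiac tissue in which a patch of "damaged" cells blocks the travelling excitation wave, with the grid split across three worker processes using Python's multiprocessing as a stand-in for the console's three cores.

import numpy as np
from multiprocessing import Pool

N = 200                              # the tissue is an N x N grid of cells
DAMAGED = np.zeros((N, N), dtype=bool)
DAMAGED[80:120, 90:110] = True       # a patch of damaged, non-conducting cells

def update_rows(args):
    """Compute the next state for one horizontal band of the grid.
    States: 0 = resting, 1 = excited, 2 = refractory."""
    grid, lo, hi = args
    new = grid[lo:hi].copy()
    for i in range(lo, hi):
        for j in range(N):
            s = grid[i, j]
            if s == 1:                                  # excited -> refractory
                new[i - lo, j] = 2
            elif s == 2:                                # refractory -> resting
                new[i - lo, j] = 0
            elif not DAMAGED[i, j]:                     # resting cell gets excited...
                neigh = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                if (neigh == 1).any():                  # ...if any neighbour is excited
                    new[i - lo, j] = 1
    return new

if __name__ == '__main__':
    grid = np.zeros((N, N), dtype=np.uint8)
    grid[0, :] = 1                                      # a wave enters from one edge
    bands = [(0, N // 3), (N // 3, 2 * N // 3), (2 * N // 3, N)]
    pool = Pool(3)                                      # three workers, like the three cores
    for step in range(50):
        parts = pool.map(update_rows, [(grid, lo, hi) for lo, hi in bands])
        grid = np.vstack(parts)                         # reassemble the full grid
    pool.close()
    print('excited cells after 50 steps:', int((grid == 1).sum()))

Each worker only writes its own band but reads the whole previous grid, so cells at the band boundaries still see their neighbours correctly; a real cardiac simulation would of course use a far more detailed electrophysiological model.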





Nominations:

Cecy: Parallel computing in chemistry

Carmen: Open source high-throughput software for distributed parallelization of computationally intensive tasks.

Alex V: Calls in CUDA Kernel




Sunday, April 29, 2012

List of words

Here I'm going to write the definition of some words:

Resolve: To reach a decision or determination.

Routing:
It's the process of selecting paths in a network along which to send network traffic.

Bottleneck:
It's a phenomenon where the performance or capacity of an entire system is limited by a single or limited number of components or resources.

Discrete:
The opposite of continuous: something that is separate; distinct; individual.

Locate: To determine or specify the position or limits of something.

Protocol: A set of rules that computers use to communicate with one another over a network by exchanging messages.

Physical: Hardware, something that can be touched.

Client: An application or system that accesses a service made available by a server.

Thursday, April 19, 2012

Contributions of the week (week 11)

This week I helped with the organization and translation of the applications section and the main page of the wiki.

I also added some information about LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). It is a molecular dynamics simulation code designed to run on parallel computers.

Here is the link to the wiki page about LAMMPS.

I added the description of some of the features and information about the installation.


My nominations of this week are for David Sosa, because of the information he added about MPI in Python.
I also nominate Abraham because of his idea of the web crawler.




Thursday, March 29, 2012

Contributions (week 8)

This week I made a contribution about how supercomputers can solve real-life problems. I did some research and found out that they are making huge contributions to medicine. That's why I thought it would be a good idea to tackle a medical problem as our project.

Here is the link to the wiki



My nominations of this week are for Roberto and Carmen, because they gave us good examples of applications that we could implement in our project.

Thursday, March 22, 2012

First meeting: Scheduling

On Monday my classmates, my teacher and I had a meeting where we talked about the project we are doing for this class. We also planned how we are going to divide the work for the next two weeks.
What we want to do is build a cluster in order to work in parallel with several computers.
We all agreed to work on Ubuntu 10.04 64-bit and to create the same username and password (sorry, I'm not writing them here...). The password creation took a while because we wanted a strong password that is not easily cracked.
When we finished discussing the basic requirements for the cluster, we thought about the tools and languages we were going to use:
pyMPI and OOMPI.

After that we made a calendar with the activities we are going to carry out in order to finish the project. Here is the calendar:


As we can see, first we are going to work on the configuration and installation of the tools we need to make the cluster work. Then we are going to actually work on the cluster and the VPN. When we finish, we are going to run some tests to verify that it really works.

I'm going to write a report about each of the meetings we have from now on.

I added this report to the wiki:

http://elisa.dyndns-web.com/progra/Cluster




Monday, March 12, 2012

Pypar

Pypar is a useful tool that allows programs written in Python to run in parallel on multiple processors and communicate using message passing.


Requirements:
  • Python
  • numpy
  • C compiler
  • MPI C library (such as openmpi)
To verify that the installation was correct, we write:

mpirun -np 2 testpypar.py

This runs pypar's test program and checks that it can use the underlying MPI C library.

Here is an example program written with pypar:

import pypar # The Python-MPI interface                                                                                                                 
numproc = pypar.size()
myid = pypar.rank()
node = pypar.get_processor_name()

print "I am proc %d of %d on node %s" %(myid, numproc, node)

if numproc < 2:
  print "Demo must run on at least 2 processors to continue"
  pypar.abort()

if myid == 0:  
  msg = "MSGP0"

  print 'Processor 0 sending message "%s" to processor %d' %(msg, 1)
  pypar.send(msg, 1)

  msg, status = pypar.receive(numproc-1, return_status=True)
  print 'Processor 0 received message "%s" from processor %d' %(msg, numproc-1)
  print 'Size of msg was %d bytes' %(status.bytes())

else:
  source = myid-1
  destination = (myid+1)%numproc

  msg, status = pypar.receive(source, return_status=True)
  print 'Processor %d received message "%s" from processor %d'\
     %(myid, msg, source)
  print 'Size of msg was %d bytes' %(status.bytes())

  msg = msg + '->P' + str(myid) #Update message                                                                                                            
  print 'Processor %d sending msg "%s" to %d' %(myid, msg, destination)
  pypar.send(msg, destination)

pypar.finalize()

This program passes a message around the processors in a ring: processor 0 sends the message to processor 1, each processor appends its rank to it and forwards it to the next one, and finally the message comes back to processor 0.
To find out which processor each copy of the program is running on, we call myid = pypar.rank(). To obtain the total number of processors we call numproc = pypar.size().

To run a pypar program you can write:

mpirun -np 4 demo.py

This command runs 4 copies of the program on different processors.

My nominations of this week are for Rafa, Cecy and Roberto.


Bibliography


Pypar documentation
Pypar: parallel programming for Python



Thursday, March 1, 2012

List of words

Here I'm going to include some common words related to computer programming.

1. Transport
The transport layer is one of the seven layers of the OSI model. It controls the flow of data and provides end-to-end delivery of data.

2. Message
A message is an object of communication that provides information.

3. Disk
A disk is used to store and transport information.
 
4. Simulation
It is an imitation of the operation of a real-world system.

5. Resource
A resource is a component of limited availability within a computer system.

6. Object
An object is characterized by having attributes, methods and an identity.
 
7. Connection
It is a link between two computers, systems or networks that allows the flow of information.

8. Clock
A clock is a device that measures and indicates time.

 

Contributions of the week (Week 5)

During this week I made some contributions to the wiki about how MPI is used in Python and what you need to do to get it running. I also added some example programs that use MPI.
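As a rough illustration of the kind of program that contribution covered, here is a minimal sketch using mpi4py, one common MPI binding for Python (the wiki entry may well have used a different binding, such as pyMPI, so treat the exact module as an assumption). It simply exchanges a message between two processes:

# Run with something like: mpirun -np 2 python ping.py   (ping.py is just an example file name)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # id of this process
size = comm.Get_size()        # total number of processes started by mpirun

if rank == 0:
    comm.send('hello from 0', dest=1, tag=0)
    reply = comm.recv(source=1, tag=0)
    print('process 0 of', size, 'received:', reply)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    comm.send('hello back from 1', dest=0, tag=0)
    print('process 1 of', size, 'received:', msg)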





My nominations are for Roberto, Juan Carlos and Rafa.

Thursday, February 23, 2012

Contributions of the week (Week 4)

During this week I made some contributions to the applications and cluster parts of the project. I consider it very important to know the things we can do.
I added some information about finding passwords and encryption keys using a supercomputer here



I also contributed some instructions on how to configure the cluster so that all users (either normal or root) can navigate it here


Thursday, February 16, 2012

Contributions of the week (week 3)

This week I made a lot of contributions to the wiki:

I added a series of steps people must follow if they want to compile CUDA and to see some of the example code in CUDA's folders and files.
What I added is also helpful for installing the necessary tools that make CUDA work correctly on a computer.
This is very important because people can now see and modify CUDA's example code and understand how it works.

Here is the link to the wiki



My nominations:

Roberto Martinez
Ramon Esteban
Juan Carlos Espinoza

Thursday, February 9, 2012

CUDA


CUDA is a parallel computing platform and programming model invented by NVIDIA. It works by harnessing the power of the GPU (Graphics Processing Unit) and can dramatically increase the performance of the computer or device where it is installed. It is useful in simulation models, helping them to be more realistic and exact, and because of that it can help to reduce the cost of an important project. It has a lot of uses; here I'm going to describe some of them:

1. Blood flow simulation and identification of hidden plaque in arteries
2. Reduction of analysis time of air traffic flow
3. Visualization and large performance boost of nanoscale molecular dynamics (NAMD)
4. Image Analysis and Video Forensics
5. Matlab GPU computing

Almost every major consumer video application has been accelerated by CUDA (applications such as products from Adobe, Sony, MotionDSP, etc.).

It has also been helpful in scientific research because it can accelerate molecular simulations and help lead to new discoveries. It is used by many researchers, doctors and pharmaceutical companies around the world.
It is also useful in the financial market, with applications such as Numerix and CompatibL, and it is used in many financial institutions.
Because of CUDA there is now a large number of GPU clusters installed around the world, and many different companies are using them.
GPU computing is fully supported by all major operating systems.
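To make the "programming model" part a bit more concrete, here is a minimal sketch of a CUDA kernel driven from Python with PyCUDA (my own example, not taken from this post's sources; it assumes an NVIDIA GPU, the CUDA toolkit and the pycuda package are installed). It adds two vectors element by element, using one GPU thread per element:

import numpy as np
import pycuda.autoinit                 # initialises a CUDA context on the GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# The kernel is ordinary CUDA C, compiled at runtime by PyCUDA.
mod = SourceModule("""
__global__ void add(float *c, const float *a, const float *b)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];
}
""")
add = mod.get_function("add")

n = 1024
a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)
c = np.empty_like(a)

# 4 blocks of 256 threads = 1024 threads, one per array element.
add(drv.Out(c), drv.In(a), drv.In(b), block=(256, 1, 1), grid=(4, 1))

print('max error:', float(np.abs(c - (a + b)).max()))
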
Bibliography

Contributions of the week (week 2)

During this week I made some posts about the installation of the Hamachi application for those users who have a Mac:

Why Hamachi?
Hamachi is a tool that can help us simulate a Local Area Network (LAN). It lets you securely extend LAN-like networks to distributed teams and workers; therefore, it may be useful because it can help us work in teams without necessarily being in the same place.

Here is a link to the wiki where I added the information

And here are the contributions I made for the wiki:



I also helped with the translation of some things on this page related to CUDA and Hamachi.

Because a lot of students have a Mac computer, I considered it important for them to know how to install it.

Nominations of the week:

Juan Carlos (because he added the main information about Hamachi to the wiki)
Cecilia (because she made a program that created recursive threads and simulated the infection of a virus)

Bibliography:


Monday, January 30, 2012

Contributions of the week

During this week I made some contributions to the wiki we share in the class.

First I wrote a little introduction with a definition of a supercomputer.


I also added information to the applications section, where I explained how supercomputers are used in real life, in order to give my classmates an idea and help them know what to do for the project.


At the end of the wiki, I added a place where people can write useful links that may help us do better research and improve the project. I only wrote two, but the idea is that more people will add more links.


Next week I will be posting more contributions. I want to make more contributions than I did this week. I hope you find this useful.


Some applications of supercomputers


With the help of supercomputers, scientists can now study a wide variety of complex problems. Supercomputers have applications in almost every area:

Computer modeling and simulation

One of the applications is related to computer modeling and simulation. Simulation can be used to speed up extremely slow processes in order to predict problems. Supercomputers can also help with molecular dynamics simulations.

NASA uses supercomputers to perform simulations:

A team of modeling and simulation experts in the NASA Advanced Supercomputing (NAS) Division is performing advanced aerodynamic simulations that supply critical design performance data more efficiently and accurately than ever before. Using NASA-developed computational fluid dynamics (CFD) codes and supercomputers at the NAS facility, the team is modeling new launch vehicle designs and computing the detailed aerodynamic flows, forces, and interactions that could affect flight performance and safety during launch (1)


Weather forecasting

Supercomputers can also be useful in weather forecasting, helping us understand the internal conditions that bring about severe weather phenomena.
Researchers may also use them to conduct real-time weather predictions, and they can make weather modeling more accurate.


Environmental Studies

Supercomputers can also help improve the realism of environmental models. A supercomputer can also help to investigate the behaviour of things that might be too small to examine by any physical means; scientists can also use it to make three-dimensional models of chemical compounds or microscopic living things.


Cloud Computing

Amazon has one of the world's fastest supercomputers, but it is in the cloud: it is not housed the way other supercomputers are, in a huge room filled with processors and storage. It may not replace traditional supercomputers, because its performance is not the same and it is slower, but it is a very good option for some researchers.

As I said before, supercomputers can also perform many other tasks that can make scientists' investigations easier and may contribute to the evolution of science and to technological advances.

Bibliography




Supercomputers


During this week I gathered some information related to the use of supercomputers and the tasks they are capable of performing.

We all know what a microcomputer is; most of us have one. A supercomputer is much more powerful than a microcomputer. Supercomputers are the fastest type of computers that exist. They are built to perform tasks that require immense amounts of calculation: quantum physics, weather forecasting, climate research, oil and gas exploration, and molecular modelling. They can also perform simulations, such as airplanes in wind tunnels or the detonation of nuclear weapons, as well as structural analysis, computational fluid dynamics, chemistry and electronic design, among many other uses.

Now I'm going to describe some of the most powerful computers in the world and what they are used for. I gathered this information from the TOP500 list, a page that contains information about the top 500 supercomputers. Here I'm only going to mention three.


1. The K computer is ranked the world's fastest supercomputer, with a rating of almost 10 petaflops. It uses 88,128 2.0 GHz 8-core SPARC64 VIIIfx processors packed in 864 cabinets, for a total of 705,024 cores, manufactured by Fujitsu with 45 nm CMOS technology. It has 1,410,048 GB of memory and a power consumption of 12,659.89 kW.


2. The Tianhe-1 system is composed of 112 computer cabinets, 12 storage cabinets, 6 communications cabinets, and 8 I/O cabinets. Each computer cabinet is composed of four frames, with each frame containing eight blades plus a 16-port switching board. Each blade is composed of two compute nodes, and each compute node contains two Xeon X5670 6-core processors and one Nvidia M2050 GPU. The system has 3,584 blades in total, containing 7,168 GPUs and 14,336 CPUs, managed by the SLURM job scheduler. The total disk storage of the system is 2 petabytes, implemented as a Lustre clustered file system, and the total memory of the system is 262 terabytes.


3. Jaguar is manufactured by Cray Inc. It has a peak performance of just over 1,750 teraflops (1.75 petaflops) and 224,256 x86-based AMD Opteron processor cores. Jaguar is a Cray XT5 system, a development of the Cray XT4 supercomputer.
Jaguar's XT5 partition contains 18,688 compute nodes in addition to dedicated login/service nodes. Each XT5 compute node contains dual hex-core AMD Opteron 2435 (Istanbul) processors and 16 GB of memory. Jaguar's XT4 partition contains 7,832 compute nodes in addition to dedicated login/service nodes. Each XT4 compute node contains a quad-core AMD Opteron 1354 (Budapest) processor and 8 GB of memory. Total combined memory amounts to over 360 terabytes (TB).

If you want to know more about this type of computer, you may want to check this page.

Bibliography