
SC&IS: High Performance Computing Facility (HPCF)

The University High Performance Computing Facility at Hall No. 7, in the School of Information Technology, was funded under the UGC UPOE program and envisaged as an important tool for raising the level of our research and remaining academically competitive, especially for research problems involving large data sets and numerical calculations. The facility has recently been upgraded with a 256-processor Sun cluster.

The HPCF Centre is conceived as a functionally distributed supercomputing environment, housing leading-edge computing systems with sophisticated software packages, connected by a powerful high-speed fibre-optic network. The computing facilities are connected to the campus LAN and WLAN, and also to the Internet.

The UPOE HPC cluster has been installed and is maintained by C-DAC. Its peak performance is 1.3 teraflops. The cluster is built with ROCKS version 5.2, and the scheduler used is Sun Grid Engine, a powerful tool with which user applications can be managed and scheduling policies implemented. To provide low latency, separate switches are used for MPI, storage and IPMI traffic. The cluster is attached to a StorageTek 5220 storage appliance, with a total available storage of 4 TB.

Brief architectural information:

  • Processor: AMD Opteron 2218, dual-core, dual-socket
  • No. of master nodes: 1
  • No. of compute nodes: 64
  • Operating system: CentOS 5.3
  • Cluster software: ROCKS version 5.2
  • Server model: Sun Fire X4200 (1 no.)
  • Compute node model: Sun Fire X2200 (64 nos.)
  • NAS appliance model: StorageTek 5220
  • Total peak performance: 1.3 TFLOPS

Calculation procedure for peak performance:

  • No. of nodes: 64
  • Memory (RAM): 4 GB per node
  • Hard disk capacity per node: 250 GB
  • Storage capacity: 4 TB
  • No. of processors and cores: 2 × 2 = 4 (dual-core, dual-socket)
  • CPU speed: 2.6 GHz
  • Floating-point operations per clock cycle for the AMD processor: 2
  • Total peak performance: no. of nodes × cores per node × CPU speed × floating-point operations per cycle = 64 × 4 × 2.6 GHz × 2 = 1.33 TFLOPS
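
As a quick check, the same arithmetic can be reproduced in a few lines of C (a minimal sketch; the figures are the ones listed above):

    #include <stdio.h>

    int main(void) {
        const int nodes = 64;             /* compute nodes in the cluster */
        const int cores_per_node = 4;     /* dual-socket x dual-core Opteron 2218 */
        const double clock_ghz = 2.6;     /* CPU clock speed in GHz */
        const int flops_per_cycle = 2;    /* floating-point operations per clock cycle */

        /* Peak GFLOPS = nodes x cores per node x GHz x FLOPs per cycle */
        double peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
        printf("Peak performance: %.2f GFLOPS (%.2f TFLOPS)\n",
               peak_gflops, peak_gflops / 1000.0);
        return 0;
    }

Running it prints 1331.20 GFLOPS, i.e. the 1.33 TFLOPS quoted above.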

Software used in the UPOE cluster:

  • Ganglia: cluster monitoring tool
  • PVM: Parallel Virtual Machine
  • HPL: High Performance Linpack (performance-testing tool)
  • Application software used on the HPC cluster: R, Amber + Q.C. tools, CID in RNA, GRID, GOLPE, ALMOND, MOKA, VOLSURF, METASITE, HMMER, INFERNAL, BLAST, MATLAB, GNUPLOT, TEIRESIAS, OPENEYE, ADF, AUTODOCK, GROMACS, etc.
  • Cluster services: the 411 Secure Information Service, which provides NIS-like functionality for Rocks clusters
  • Scheduler: Sun Grid Engine, a job-scheduling tool. A fair-share policy is implemented so that all users receive equal priority, and both batch and parallel jobs can be submitted through it.

Application software and compilers:

  • Open MPI, LAM/MPI
  • C, C++ and FORTRAN compilers (both GNU and Intel)
  • Bio Roll: for bio-chemical applications
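
To illustrate how the Open MPI stack and C compilers listed above are typically used together, here is a minimal MPI "hello world" in C (an illustrative sketch, not part of the facility's own documentation):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                    /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of ranks */
        MPI_Get_processor_name(name, &name_len);   /* compute node this rank runs on */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();                            /* shut down MPI cleanly */
        return 0;
    }

Such a program would be compiled with the MPI wrapper compiler (for example, mpicc hello.c -o hello) and submitted to Sun Grid Engine with qsub, requesting slots through a parallel environment; the parallel-environment name is site-specific and should be confirmed with the cluster administrators.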