University of North Texas
Arizona State University
University of Texas at Dallas
Last Reviewed: 01/14/2020
The Net-Centric and Cloud Software & Systems Industry/University Cooperative Research Center (NCSS I/UCRC) performs the basic research needed to develop software and systems for networked and cloud computing environments. This includes developing and verifying the secure, resilient, and efficient software and hardware that form net-centric and cloud computing systems.
The NSF Net-Centric and Cloud Software & Systems Industry/University Cooperative Research Center (Net-Centric and Cloud I/UCRC) performs the basic research needed to completely restructure software and systems for networked and cloud computing environments. Since these applications will be available via networks, and since some of the most important applications will be time critical and potentially life-sustaining, the development of net-centric software demands flawless handling of security and dependability. The Center's research is driven by the needs of the industrial members of the center.
We develop programs for civilian applications, integrated communication systems, networked sensor systems, and command and control systems.
The military services are shifting from platform-based systems, which operate independently, to network-centric, integrated computing infrastructures for operating, for example, planes, ships, and missiles.
Current systems, however, leave much to be desired. Effective integration has yet to be achieved, which is detrimental to the availability, security, interoperability, and cost of these systems. Most contemporary approaches to net-centric software seek to “patch” the existing system, which makes interoperability even more complex. Our consortium’s goal is therefore to rebuild net-centric software from the bottom up.
The faculty and students of the University of North Texas are focusing on multicore processing, reliability, and resource management of networked and cloud computing systems. More specifically, the research is evaluating the memory bottlenecks of applications and devising techniques to: (1) change data layouts to improve performance; (2) create new memory organizations for emerging technologies, including 3-D stacked DRAMs; and (3) characterize workloads and develop autonomic, proactive resource management for analyzing failure behaviors and adapting workloads to maintain service-level agreements in the presence of failures.
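One common data-layout change of the kind mentioned in item (1) is converting an array-of-structs (AoS) layout to a struct-of-arrays (SoA) layout, which improves spatial locality when a hot loop traverses only one field. The sketch below is purely illustrative (the `aos_to_soa` helper and the particle data are our own examples, not the project's tooling):

```python
# Illustrative AoS -> SoA transformation (hypothetical example, not the
# project's actual code): traversing one field of an SoA layout scans a
# single contiguous list instead of striding across whole records.

def aos_to_soa(records, fields):
    """Transform a list of dicts (AoS) into a dict of lists (SoA)."""
    return {f: [r[f] for r in records] for f in fields}

# Array-of-structs: each record holds all fields of one "particle".
particles = [{"x": i, "y": 2 * i, "mass": 1.0} for i in range(4)]

# Struct-of-arrays: each field becomes one contiguous sequence.
soa = aos_to_soa(particles, ["x", "y", "mass"])

# A reduction over one field now touches only that field's storage.
total_x = sum(soa["x"])
```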
The faculty and students at the University of Texas site focus on Service-Oriented Architectures, software service composition and QoS when running on net-centric and cloud computing systems.
The faculty and students at the Arizona State University site focus on communication systems including signal processing and communication protocols.
3D DRAMs and Processing in Memory
The primary focus of this research is to explore new memory organizations for emerging technologies such as 3D stacked DRAMs and solid-state disks, including Phase Change Memories and Flash memories. In previous projects we investigated memory organizations to exploit these technologies, evaluating three different organizations: 3D DRAM as main memory, 3D DRAM as a last-level cache, and 3D DRAM and PCM together as main memory. In this continuing project, we explore how to utilize the logic layer of a 3D stacked DRAM. We will investigate whether common functionalities of emerging applications (Big Data, Cloud) can be embedded in this logic layer, thus migrating some level of processing to memory (PIM).
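The trade-off among such organizations can be sketched with a back-of-the-envelope average-memory-access-time (AMAT) model. The latencies and hit rate below are assumed round numbers for illustration, not measurements from the project:

```python
# Toy AMAT comparison of two organizations from the project: 3D stacked
# DRAM as a last-level cache (LLC) vs. as main memory. All latencies
# (ns) and the hit rate are made-up illustrative values.

def amat(hit_rate, hit_latency, miss_latency):
    """Average memory access time for a single cache level."""
    return hit_rate * hit_latency + (1 - hit_rate) * miss_latency

# Assumptions: 3D DRAM access ~25 ns, off-chip DRAM ~80 ns.
# As an LLC, a miss pays the 3D DRAM lookup plus the off-chip access.
llc_config = amat(hit_rate=0.7, hit_latency=25, miss_latency=25 + 80)

# As main memory, every access is served by the 3D DRAM directly.
main_mem_config = 25
```

Which organization wins depends on the workload's locality, which is why the project characterizes applications before choosing an organization.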
A Goal-Oriented Approach for Obtaining Good Private Cloud-Based System Architectures
The fast-growing Cloud Computing paradigm makes it possible to use unprecedented amounts of computing resources at lower costs, and provides other benefits such as fast provisioning and reliability. In designing a robust architecture for a cloud-based system that meets the goals of all stakeholders, the numbers, types, and layouts of devices must be factored in from the earliest stages of design. However, there seems to be a lack of methodologies for incorporating stakeholder goals into the design process for such systems, and for assuring with higher confidence that the designs are likely to be good enough for the stated goals. In this project, we propose a goal-oriented simulation approach for cloud-based system design whereby stakeholder goals are captured, together with such domain characteristics as workflows, and used in creating a simulation model as a proxy for the cloud-based system architecture. Simulations are then run, in an interleaving manner, against various configurations of the model as a way of rationally exploring, evaluating and selecting among incrementally better architectural alternatives. We illustrate important aspects of this approach for the private cloud deployment model and report on our experiments, using a smartcard-based public transportation system.
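The interleaved explore-evaluate-select loop can be illustrated with a toy model in which the stakeholder goal is a response-time bound and each candidate configuration varies the number of servers. The M/M/1 formula and all numbers below are stand-ins for the project's richer simulation models:

```python
# Illustrative sketch of goal-driven architecture selection: evaluate
# configurations (server counts) against a stakeholder response-time
# goal and pick the cheapest one that meets it. The M/M/1 queueing
# formula is a stand-in for a full simulation run.

def response_time(arrival_rate, service_rate, servers):
    """Mean response time, load split evenly over identical servers."""
    lam = arrival_rate / servers
    assert lam < service_rate, "unstable configuration"
    return 1.0 / (service_rate - lam)  # M/M/1 mean response time

def select_architecture(goal_seconds, arrival_rate, service_rate, max_servers):
    """Return the fewest-server configuration meeting the goal, or None."""
    for n in range(1, max_servers + 1):
        if arrival_rate / n < service_rate and \
           response_time(arrival_rate, service_rate, n) <= goal_seconds:
            return n
    return None

# Goal: mean response time under 0.5 s at 8 req/s, 5 req/s per server.
best = select_architecture(goal_seconds=0.5, arrival_rate=8.0,
                           service_rate=5.0, max_servers=10)
```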
A QoS-Aware BPEL Framework for Service Selection and Composition Using QoS Properties
The promise of service-oriented computing, and the availability of web services in particular, promotes delivery of new services composed of existing ones, i.e., service components are assembled to achieve integrated computational goals. Business organizations strive to utilize these services and provide new service solutions, and appropriate tools are needed to achieve these goals. As web and Internet-based services grow into clouds, the inter-dependency of services and their complexity increase tremendously. The cloud ontology depicts service layers from a high level, such as Application and Software, to a low level, such as Infrastructure and Platform. Each component that resides at one layer can be useful to others as a service. This hints at the amount of complexity resulting not only from horizontal but also vertical integrations in building and deploying a composite service. Our framework tackles the complexity of the selection and composition issues by adding qualitative information to the service descriptions using the Business Process Execution Language (BPEL). Engineers can use BPEL to explore design options and have the QoS properties analyzed for the design. QoS properties of each service are annotated with our extension to the Web Service Description Language (WSDL). In this project, we describe our framework and illustrate its application to the performance aspect of QoS. We translate BPEL orchestration and choreography into appropriate queueing networks, and analyze the resulting model to obtain the performance properties of the composed service. Our framework is also designed to use other QoS extensions of WSDL, adaptable business-logic languages, and composition models for other QoS properties.
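For a purely sequential orchestration, the translation to a queueing network reduces to tandem queues whose mean response times add up. The sketch below illustrates that idea only; the service rates are invented, and a real BPEL-to-queueing-network translation must also handle branching and parallel flows:

```python
# Illustrative sketch: a sequential composition of services, each
# annotated (as a WSDL QoS extension might) with a service rate,
# modeled as tandem M/M/1 queues. All rates are made-up examples.

def mm1_response(arrival_rate, service_rate):
    """Mean response time of a single M/M/1 queue."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

def composed_response(arrival_rate, service_rates):
    """End-to-end mean response time of a sequential composition."""
    return sum(mm1_response(arrival_rate, mu) for mu in service_rates)

# Three services invoked in sequence at 2 req/s offered load.
rt = composed_response(2.0, [10.0, 5.0, 4.0])
```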
A Semantic-Based Semi-Automated Role Mapping Mechanism
Role-based access control (RBAC) has been widely adopted by industry and government. However, RBAC is only suitable for closed enterprise environments; when organizations collaborate, the roles in one organization's hierarchy must be mapped to roles in another's. Role mapping can be a tedious task for security officers if it is done completely manually, yet performing role mapping automatically incurs security risks. In this project, we introduce a semi-automated role mapping process, in which promising role mappings are generated automatically and recommended to the security officers, who then approve or modify the recommended mappings. We present a method to automatically generate role mappings based on the similarities of the roles in two role hierarchies. We use an example to illustrate our approach and show its feasibility.
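A minimal version of similarity-driven recommendation can be sketched by comparing roles' permission sets with Jaccard similarity. This is an illustrative stand-in only; the role names, permissions, and threshold are invented, and the project's method operates on full role hierarchies rather than flat permission sets:

```python
# Illustrative semi-automated role mapping: rank cross-domain role
# pairs by permission-set similarity; a security officer would still
# approve or modify each suggestion. All names/data are hypothetical.

def jaccard(a, b):
    """Similarity of two permission sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_mappings(domain_a, domain_b, threshold=0.5):
    """Suggest role pairs whose similarity meets the threshold."""
    out = []
    for ra, perms_a in domain_a.items():
        for rb, perms_b in domain_b.items():
            s = jaccard(perms_a, perms_b)
            if s >= threshold:
                out.append((ra, rb, round(s, 2)))
    return sorted(out, key=lambda t: -t[2])

hospital = {"nurse": {"read_chart", "update_vitals"},
            "doctor": {"read_chart", "update_vitals", "prescribe"}}
clinic = {"practitioner": {"read_chart", "prescribe", "update_vitals"}}
suggestions = recommend_mappings(hospital, clinic)
```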
Access Protocols in Data Partitioning Based Cloud Storage
Existing share access protocols require the client to retrieve all shares, even for read accesses, to achieve atomic semantics. In cloud storage with widely distributed servers, this implies significant communication latency (from the client to the farthest server) and additional network traffic. In this project, we consider a nearby share retrieval approach to improve read performance. We first analyze the impact of this approach on the consistency semantics. Then we present the nearby share retrieval (NSR) protocol, which satisfies regular semantics and guarantees wait-free reads. Experimental results show that our protocol yields significantly better read performance than existing protocols. To further optimize the performance of read accesses, we set up experiments to analyze the performance impact of the number of shares retrieved in one round. Experimental results show that for most of the data, using the least required number (the threshold number in (n, k) data partitioning schemes) yields the best performance, but for hot data (with high access rates), accessing more servers in one round can achieve better performance.
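The core of the nearby-retrieval idea can be illustrated simply: under an (n, k) partitioning scheme, any k of the n shares reconstruct the datum, so a read can contact the k lowest-latency servers instead of all n. The sketch below shows only that server-selection step (with invented region names and latencies), not the NSR protocol's consistency machinery:

```python
# Illustrative server selection for nearby share retrieval: with an
# (n, k) scheme, retrieve shares from the k closest (lowest-latency)
# servers. Region names and latencies are hypothetical.

def nearby_shares(server_latencies, k):
    """Pick the k lowest-latency servers to retrieve shares from."""
    ranked = sorted(server_latencies.items(), key=lambda kv: kv[1])
    return [name for name, _ in ranked[:k]]

# n = 5 shares, one per region; read needs only k = 3 of them.
latencies_ms = {"us-east": 12, "eu-west": 95, "ap-south": 180,
                "us-west": 40, "sa-east": 140}
chosen = nearby_shares(latencies_ms, k=3)
```

Retrieving only the threshold number avoids the round trip to the farthest server, which is exactly the latency the all-shares protocols pay.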
Internet of Things
The Internet of Things (IoT) offers the potential for developing high-quality-of-life systems for many applications, such as healthcare, smart vehicles, smart homes, infrastructure monitoring, agriculture, and supply chain management. Several challenges need to be addressed for developing dependable and secure IoT applications. These include technologies to ensure adequate power for low-energy IoT devices, real-time collaborative control, timely communication and coordination between various IoT entities, reliable and secure data transfer and storage, and high system availability and reliability. In this project, we propose to develop a customizable distributed framework to achieve a high-assurance integrated platform for deploying highly dependable IoT systems. We will develop advanced techniques for (1) effective task management for optimal performance and power usage; (2) a learning-based intelligent real-time control decision process with a multi-layer integrated mechanism; (3) optimal power management for battery-powered devices; (4) end-to-end system security; and (5) coordination and redundancy management to achieve high system dependability. A case-study IoT system will be developed to evaluate the distributed framework for IoT systems.
Nemesis: Automated Architecture for Threat Modeling and Risk Assessment for Cloud Computing
What are the types of threats facing cloud assets? Is there a scale to quantify threat levels? Is there a metric to characterize critical vulnerabilities? In this project, we present Nemesis, a novel automated architecture for threat modeling and risk assessment of cloud systems that addresses these and other related questions. With Nemesis, we use ontologies (knowledge bases) to model the threats and assess the risks to any given cloud system. To realize this, we built ontologies for vulnerabilities, defenses, and attacks, and automatically instantiate them to generate Ontology Knowledge Bases (OKBs). These OKBs capture the relationships between vulnerabilities, defense mechanisms, and attacks. We use the generated OKBs and the Microsoft STRIDE model to classify the threats and map them to relevant vulnerabilities. This is used together with the cloud configuration and a Bayesian threat probability model to assess the risk. Apart from classifying a given cloud system’s threats and assessing its risk, we deliver two useful metrics to rank the severity of classified threat types and evaluate exploitable vulnerabilities. In addition, we recommend an alternative cloud system configuration with a lower perceived risk, and mitigation techniques to counter the classified threat types. As a proof of concept of the proposed architecture, we designed an OpenStack-based cloud and deployed various services, evaluated Nemesis, and documented our findings. The proposed architecture can help evaluate the security threat level of any cloud computing configuration and of any configuration of shared technologies found in computing systems.
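The flavor of threat ranking can be sketched as follows. This is not Nemesis's actual model: the STRIDE categories are real, but the vulnerability-to-threat mapping and exploit probabilities below are invented, and the independence assumption is a simplification:

```python
# Illustrative threat ranking in the spirit of a Bayesian threat
# probability model: treat each mapped vulnerability's exploitation as
# independent, so a threat's probability is the complement of no
# mapped vulnerability being exploited. All numbers are made up.

def threat_probability(exploit_probs):
    """P(threat realized) = 1 - P(no mapped vulnerability exploited)."""
    p_none = 1.0
    for p in exploit_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical STRIDE threat types mapped to exploit probabilities
# of the vulnerabilities an OKB associates with each.
stride_map = {
    "Tampering": [0.1, 0.3],
    "Spoofing": [0.05],
    "DenialOfService": [0.2, 0.2, 0.1],
}

# Rank threat types by probability, most severe first.
ranked = sorted(stride_map, key=lambda t: -threat_probability(stride_map[t]))
```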
Ontology of Secure Service Level Agreement, Big Data, MapReduce
Maintaining security and privacy in the Cloud is a complex task, made even more challenging as the number of vulnerabilities associated with the cloud infrastructure and its applications increases rapidly. Understanding the security service level agreements (SSLAs) and privacy policies offered by service and infrastructure providers is critical for consumers to assess the risks of the Cloud before they consider migrating their IT operations to it. To address these concerns about assessing the security and privacy risks of the Cloud, we have developed ontologies for representing SSLAs. Our ontologies can be used to understand the security agreements of a provider, to negotiate desired security levels, and to audit the compliance of a provider with respect to federal regulations (such as HIPAA).
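At its simplest, an ontology-backed SSLA check matches structured assertions in a provider's agreement against a consumer requirement. The triple vocabulary, provider names, and values below are entirely invented stand-ins for the actual ontology:

```python
# Toy sketch of SSLA compliance checking (hypothetical vocabulary, not
# the project's ontology): each provider's SSLA is a set of
# (subject, property, value) assertions, and a requirement is satisfied
# if the matching assertion is present.

sslas = {
    "ProviderA": {("data_at_rest", "encrypted", "AES-256"),
                  ("audit_log", "retained_days", "365")},
    "ProviderB": {("data_at_rest", "encrypted", "none")},
}

def compliant(provider, requirement):
    """Does the provider's SSLA contain the required assertion?"""
    return requirement in sslas.get(provider, set())

# A HIPAA-style requirement: data at rest must be strongly encrypted.
req = ("data_at_rest", "encrypted", "AES-256")
ok = [p for p in sorted(sslas) if compliant(p, req)]
```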
Silverlining: A Cloud Forecaster Using Benchmarking and Simulation
Chief Information Officers (CIOs) are asking the question, “Should I commit my organization’s software applications to the cloud, and if so what would be the performance and cost?” The Silverlining research project provided twenty-one University of Texas at Dallas (UT Dallas) graduate students with a platform to answer this question, by benchmarking the performance and cost of applications in the Google GAE cloud. The industry standard On Line Transaction Processing (OLTP) benchmarks and the follow-on cloud simulation forecaster were designed to guide industry CIOs, as well as system/software engineers, through the new cloud applications development and operations lifecycle. The UT Dallas Silverlining team extended the Transaction Processing Performance Council’s (TPC) OLTP benchmark to operate, for the first time, over the internet into the Google App Engine (GAE) cloud using two very different database engines (CloudSQL and Datastore/NoSQL).
Sliding Window Technique to Detect the Presence of LTE
This project explores the synchronization signals used by Long Term Evolution (LTE), with a main focus on detecting the presence of LTE coverage. The LTE system uses its central 62 subcarriers for synchronization and for broadcasting system information. We use a sliding-window technique over fading channels with different numbers of channel taps to detect the presence of the two synchronization signals, which in turn indicates the presence of LTE.
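The sliding-window idea amounts to correlating the received samples against a known synchronization sequence at every offset and looking for a peak. The sketch below is heavily simplified: real LTE detection correlates complex Zadoff-Chu sequences over the central subcarriers under fading, while here a short real-valued sequence stands in:

```python
# Simplified sliding-window detection sketch: correlate a known sync
# sequence against the received samples at every offset; a strong peak
# indicates the signal's presence. Real PSS detection uses complex
# Zadoff-Chu sequences; this real-valued toy only shows the mechanism.

def sliding_correlation(received, known):
    """Correlation score of the known sequence at each window offset."""
    n = len(known)
    return [sum(received[i + j] * known[j] for j in range(n))
            for i in range(len(received) - n + 1)]

known = [1, -1, 1, 1]                   # stand-in sync sequence
received = [0, 0, 1, -1, 1, 1, 0, 0]    # sequence embedded at offset 2
scores = sliding_correlation(received, known)
peak = max(range(len(scores)), key=lambda i: scores[i])
# A score above a detection threshold at some offset flags LTE presence.
```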
Sequential Wireless Sensor Network (WSN) Discovery
Wireless sensor network applications often require deployment in large areas that are subject to impulsive noise. Robust performance; scalable, reconfigurable clusters with long network lifetimes; and the ability to compute functions of measured values are also very desirable. Hence, sensor networks and associated algorithms must be designed to be low power, location aware, and capable of robust parameter estimation and distributed function computation in the presence of channel noise. Due to the reconfigurable nature of the clusters, such systems must complete computations without relying on fixed clusterheads or fusion centers. Such a design requires a fully distributed consensus system for in-network computation. For this project, we propose to: (a) study the effects of nonlinear functions for robust performance and power-aware transmissions; (b) design and analyze distributed computation for fully distributed systems; (c) develop algorithms for sequential sensor localization and study the effects of error propagation; and (d) develop two testbeds using commercial off-the-shelf mobile devices and software radio hardware to test, profile, and validate the algorithms developed in this project.
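In-network computation without a fusion center can be illustrated with the classic average-consensus iteration, in which every node repeatedly averages with its neighbors until all nodes agree on the global mean. The sketch assumes idealized noiseless links and an invented four-node ring topology, whereas the project specifically targets noisy channels:

```python
# Minimal average-consensus sketch (idealized, noiseless links): each
# node nudges its value toward its neighbors' values, so the whole
# network converges to the mean of all measurements without any
# clusterhead or fusion center. Topology and data are hypothetical.

def consensus_step(values, neighbors, eps=0.2):
    """One synchronous consensus update across all nodes."""
    return [v + eps * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

values = [4.0, 8.0, 6.0, 2.0]                        # local measurements
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
for _ in range(100):
    values = consensus_step(values, ring)
# Every node now holds (approximately) the global average, 5.0.
```

The step size eps must stay below 2 divided by the largest Laplacian eigenvalue of the graph for the iteration to converge; 0.2 satisfies that for this ring.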
Sensors and Machine Learning for Condition Monitoring
Condition monitoring is the process of determining equipment health and predicting machine failures before they occur, in order to minimize spare-part cost, system downtime, and time spent on maintenance. The monitoring process can be automated by using a variety of sensors that collect massive amounts of raw data from the machinery being monitored. These sensor data are analyzed with statistical machine-learning approaches to develop the monitoring systems. In this project, we have developed standardized workflows which leverage: (1) libraries of predefined solutions allowing for easy definition of normal and abnormal signatures for specific situations; and (2) the use of inertial and magnetic sensors. We investigate methods to select features and specific sensors that provide optimal results, and we adapt machine learning algorithms to our application.
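A minimal version of the normal/abnormal-signature idea is to extract a simple feature, such as the RMS of a vibration window, and learn an abnormality threshold from normal-condition data. The feature, margin, and signal values below are invented stand-ins for the project's signature libraries and learned models:

```python
import math

# Illustrative condition-monitoring sketch (hypothetical data and
# margin): extract an RMS feature from a vibration window and flag
# windows that exceed a threshold learned from normal-condition data.

def rms(window):
    """Root-mean-square amplitude of one sensor window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def fit_threshold(normal_windows, margin=1.5):
    """Learn an abnormality threshold from normal-condition windows."""
    features = [rms(w) for w in normal_windows]
    return max(features) * margin

# Low-amplitude windows recorded while the machine was healthy.
normal = [[0.1, -0.2, 0.15, -0.1], [0.05, -0.1, 0.2, -0.15]]
threshold = fit_threshold(normal)

# A high-amplitude window suggesting a developing fault.
faulty_window = [1.2, -1.1, 1.3, -1.4]
is_abnormal = rms(faulty_window) > threshold
```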
Underdetermined Direction-of-Arrival / Array Processing Using Virtual Array Concepts
In this project, we develop approaches and performance bounds for techniques that extend the degrees of freedom of sensor arrays to identify a greater number of targets than available sensors. Some progress has been made in this area previously: for active sensing, examples include multiple-input multiple-output (MIMO) radar; for passive sensing, examples include building virtual arrays using higher-order cumulants. We extend these approaches by considering a general set of signal separation approaches to develop a larger number of degrees of freedom, and we investigate general bounds on performance.
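The virtual-array idea can be illustrated with the difference coarray: the pairwise position differences of a sparse physical array yield many more distinct virtual sensor positions (lags) than physical sensors, which is what lets such methods resolve more sources than sensors. The sensor positions below are one example of a nested-array-style geometry chosen for illustration:

```python
# Illustrative difference coarray: a 6-sensor sparse array whose
# pairwise position differences fill every integer lag from -11 to 11,
# giving far more virtual sensors than physical ones. The geometry is
# an illustrative nested-array-style example, not the project's design.

def difference_coarray(positions):
    """All distinct pairwise position differences (virtual lags)."""
    return sorted({a - b for a in positions for b in positions})

physical = [0, 1, 2, 3, 7, 11]          # 6 physical sensor positions
virtual = difference_coarray(physical)   # 23 distinct virtual lags
extra_dof = len(virtual) - len(physical)
```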