This capstone report presents E.D.E.N. (Every Day, Every Night) — an original continuous improvement framework designed for nonstop, high-dependency operations in industries such as logistics, aviation, digital infrastructure, and healthcare. Drawing on principles from Lean, Six Sigma, high-reliability organizations (HROs), and data-driven decision science, the paper introduces four interlocking pillars: Engaged & Empowered Teams, Data-Driven Continuous Feedback, End-to-End Alignment, and Nonstop Adaptive Resilience. Through detailed analysis of recent global disruptions — including the 2024 CrowdStrike outage, Boeing’s manufacturing failures, the Red Sea shipping crisis, and Taiwan’s semiconductor challenges — the work demonstrates how organizations can embed real-time adaptability, resilience, and continuous improvement directly into their operations. The E.D.E.N. framework is proposed as a new model for achieving operational excellence and resilience in an era where downtime is no longer an option.
This dataset details the force-displacement response of porcine meniscus under tensile loading to fracture. Samples were cut from the anterior, middle, and posterior regions of the meniscus. The geometric dimensions of each specimen are included.
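Since specimen dimensions accompany the force-displacement data, the raw curves can be converted to engineering stress-strain. The sketch below is illustrative only — the function name, a rectangular cross-section, and all numeric values are assumptions, not taken from the dataset.

```python
# Hypothetical sketch: converting force-displacement data to engineering
# stress-strain, assuming a rectangular specimen cross-section.
# All names and values here are illustrative, not the dataset's actual fields.

def to_stress_strain(force_N, displacement_mm, width_mm, thickness_mm, gauge_length_mm):
    """Return engineering stress (MPa) and strain (-) from force-displacement data."""
    area_mm2 = width_mm * thickness_mm           # rectangular cross-section
    stress = [f / area_mm2 for f in force_N]     # N/mm^2 is numerically MPa
    strain = [d / gauge_length_mm for d in displacement_mm]
    return stress, strain

# Example with made-up numbers:
stress, strain = to_stress_strain(
    force_N=[0.0, 2.0, 4.0],
    displacement_mm=[0.0, 0.5, 1.0],
    width_mm=2.0, thickness_mm=1.0, gauge_length_mm=10.0,
)
print(stress)  # [0.0, 1.0, 2.0]
print(strain)  # [0.0, 0.05, 0.1]
```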
The dataset contains raw data indicating the start and stop times of water flow at fixtures in the Marian Spencer Hall Cafeteria restroom during hours of operation. The data were collected as part of an effort to develop and test a novel method of measuring flow to calculate the probability that a fixture is busy (the fixture p-value). The fixture p-value is one of the parameters necessary to predict peak demand in buildings for pipe-sizing purposes.
There are two .csv files, a README file, and a sample of the data collection template with contact information. The dataset also contains MATLAB code written to accept data in the suggested format and estimate the fixture probability of use.
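The fixture p-value idea can be sketched as total flow time divided by observation time. This is a minimal illustration only — the interval format and numbers are assumptions; the dataset's actual layout and estimator are in the included README and MATLAB code.

```python
# Minimal sketch of the fixture p-value: the probability a fixture is busy,
# estimated as total flow time divided by observation time.
# The (start, stop) interval format here is an assumption for illustration.

def fixture_p_value(intervals, observation_seconds):
    """intervals: list of (start_s, stop_s) flow events within the window."""
    busy = sum(stop - start for start, stop in intervals)
    return busy / observation_seconds

# Example: three flow events totalling 90 s within a 1-hour window
p = fixture_p_value([(0, 30), (600, 640), (3000, 3020)], 3600)
print(round(p, 4))  # 0.025
```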
Hypothesis: Electroencephalography and artificial neural networks can be combined to read in a user's EEG-based brain activity and correctly classify that activity.
Goal: This research project aims to combine EEG (electroencephalography) and ANNs (artificial neural networks) by reading in a user's EEG-based brain activity and using an ANN to correctly classify that activity. This specific application aims to classify EEG data of a user being presented with digits (0-9) and letters (A-J).
Process: The project goals are accomplished by building an EEG headset capable of collecting data, generating a labeled dataset (EEG activity is the data, character being presented is the label), and creating ANNs to analyze the labeled dataset.
Results: The EEG headset, based on the UltraCortex III from OpenBCI, was successfully built. A data collection protocol was created, programs were coded to facilitate this data collection, and the dataset was successfully generated (3160 samples total, 158 samples/character). Several ANNs were created; these networks were capable of learning and of overfitting on the data, but classification accuracy on test data did not exceed chance level (more time needs to be devoted to trying different networks and manipulating the data).
Future work: Recommended tasks for continuing this project include adjusting the parameters of the existing CNNs, trying a wider variety of neural network architectures, applying data mining techniques, extracting more features from the existing data in different ways, collecting more digit and letter EEG data, and altering the data collection process.
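The classification setup above can be sketched in miniature. The block below is a toy stand-in, not the project's actual networks: it trains a single-layer softmax classifier in NumPy on synthetic random vectors playing the role of EEG feature windows, with the class count (20 characters: digits 0-9 and letters A-J) mirroring the report's setup.

```python
# Illustrative sketch only: a minimal softmax classifier in NumPy standing in
# for the project's larger ANNs. The data is synthetic: each class gets a
# distinct mean vector, so the toy problem is learnable -- unlike real EEG,
# where separability is precisely the hard part.

import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features, n_samples = 20, 64, 400   # 20 characters: 0-9 and A-J

means = rng.normal(size=(n_classes, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X = means[y] + 0.5 * rng.normal(size=(n_samples, n_features))

W = np.zeros((n_features, n_classes))
for _ in range(200):                              # batch gradient descent
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    p[np.arange(n_samples), y] -= 1.0             # dL/dlogits for cross-entropy
    W -= 0.1 * (X.T @ p) / n_samples

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")  # well above the 5% chance level on this toy data
```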
A new formula has been developed that determines the passage of time. In the paper, this is particularized for cases of time dilation due to speed and gravity.
Additionally, using the previous equation, an interpretation of the nature of black holes, their formation, growth, and dimension can be developed.
Moreover, based on all of the above, a different way of understanding mass and space is proposed, which ultimately implies an alternative expression relating mass and energy.
The development of complex and dependable systems like autonomous vehicles relies increasingly on the use of the Systems Modeling Language (SysML). In fact, SysML has become a de facto standard for systems engineering. With model-driven engineering, a SysML model serves as a reference for early defect detection in the system under design: the earlier errors are detected, the lower the cost of handling them. Mutation testing is a fault-based technique that has recently been applied to SysML behavioral models (e.g., state machine diagrams). Specifically, a system's state-transition design can be fed to a model checker, where mutants are automatically generated and then killed against the desired design specifications (e.g., safety properties). In this paper, we present a novel approach based on process mining to improve the effectiveness and efficiency of SysML mutation testing based on model checking. In our approach, the mutation operators are applied directly to the state machine diagram. These mutants are then fed as traces into a process mining tool and checked against the event logs. Our initial results indicate that the process mining approach kills more mutants, and does so faster, than the model checking method.
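The mutate-and-kill idea can be sketched on a toy state machine. This is an illustration of the general technique only, not the paper's tooling: a transition table is mutated by redirecting targets, and a mutant is killed when replaying an observed event log diverges from the original design. The states, events, and logs are invented.

```python
# Toy sketch of mutation testing via trace replay (not the paper's actual
# process-mining pipeline). States, events, and logs are invented.

def replay(transitions, start, events):
    """Follow events through a {(state, event): next_state} table."""
    state = start
    for e in events:
        key = (state, e)
        if key not in transitions:
            return None          # trace not accepted by this design
        state = transitions[key]
    return state

# Original design: a 3-state machine
orig = {("Idle", "go"): "Run", ("Run", "stop"): "Idle", ("Run", "halt"): "Fail"}
logs = [["go", "stop"], ["go", "halt"]]   # event logs of the desired behavior

# Generate mutants: redirect each transition to every other state
states = {"Idle", "Run", "Fail"}
mutants = []
for key, tgt in orig.items():
    for other in states - {tgt}:
        m = dict(orig)
        m[key] = other
        mutants.append(m)

# A mutant is killed if any log replays to a different outcome than the original
killed = [m for m in mutants
          if any(replay(m, "Idle", log) != replay(orig, "Idle", log) for log in logs)]
print(len(mutants), len(killed))  # 6 6 -- every mutant is killed by these two logs
```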