
Category: Simulation

19 Jun 2019
dynamic systems modeling

Deep Dive Into Dynamic Vehicle Systems Modeling

Let’s take a deep dive into dynamic vehicle systems modeling with a step-by-step example – modeling and simulating a series hybrid vehicle in Dymola®.

A series hybrid has an internal combustion engine that is only used to generate electricity. This means there is no direct connection from the engine to the wheels – instead electric motors are used to provide torque to the wheels.

This example demonstrates not only how components from different domains, like internal combustion engines and electric motors, can be combined to build a complete model of your vehicle, but also how to model the control systems.

To watch a prerecorded video of this scenario, please click here.

Step 1: Modeling the Engine and Crankshaft

We’ll start by modeling the internal combustion engine. Models are typically created by dragging and dropping component models into a schematic diagram. Notice that the engine model has two inputs. The first input is used to specify the normalized throttle position for the engine and the other, a Boolean signal, is used to specify whether fuel should be injected or not. We’ll revisit the topic of how to control this engine later. For now, we’ll send constant signals into the engine as a starting point and switch to a closed-loop control strategy later. To finish building the engine portion of our model, let’s add a rotational inertia of 0.15 kg m² to represent the crankshaft.
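Under the hood, this part of the model is just a torque balance on the crankshaft inertia, J dω/dt = net torque. As a back-of-the-envelope check (entirely outside Dymola), here is a minimal Python sketch; the torque map, friction coefficient and throttle value are invented placeholders, and only the 0.15 kg m² inertia comes from the example.

```python
# Open-loop engine spinning a crankshaft inertia, explicit Euler.
# Hypothetical numbers except J, which matches the example.
J = 0.15            # crankshaft inertia, kg*m^2 (from the example)
max_torque = 120.0  # N*m, assumed engine rating
throttle = 0.3      # constant normalized throttle (open loop)
fuel_on = True      # constant Boolean fuel command

omega = 0.0         # crankshaft speed, rad/s
dt = 0.001
for _ in range(int(5.0 / dt)):                      # simulate 5 s
    torque = max_torque * throttle if fuel_on else 0.0
    torque -= 0.05 * omega                          # simple viscous friction
    omega += (torque / J) * dt                      # J * domega/dt = torque

print(f"crankshaft speed after 5 s: {omega:.1f} rad/s")
```

With these placeholder values the speed approaches its steady state (36 N·m ÷ 0.05 N·m·s ≈ 720 rad/s) with a 3 s time constant.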

Note that the block diagrams used to model control functions are seamlessly combined with the mechanical components. More importantly, notice the difference in the connectors. The block diagram components have arrows on them, indicating information flows through the system. The mechanical connections on the other hand are directionless, acausal connectors. Acausal connectors allow us to build models that are flexible. In this case, we’ve connected the crankshaft to the engine model but we have the freedom to connect it to any other rotational component. We don’t need to worry about whether that component will be a spring, an engine, or a clutch; whatever is needed, we just instantiate it from the library and connect it up. 

Step 2: Modeling the Transmission

With the engine out of the way, let’s start looking at the transmission. Let’s model a simple transmission with a pair of motors from the standard library. One motor is connected to the engine, acting as the generator, and the other is connected to the wheels, driving the vehicle. To control the motor, let’s insert a current control block. This component is essentially an actuator, controlled from outside the transmission. The input to this component is the requested motor current. The actual value for the requested current will have to be calculated based on the torque required by the vehicle. For now, let’s simply add a constant input with an initial value of zero (the motor is not running).

Next, we connect the generator and the motor, and add a ground to the circuit. Our model is still missing one important thing: batteries to store energy. There are many ways to model batteries. Just to keep things simple, let’s use a large capacitor as the battery and add it in parallel with the motor. This means that electricity generated by the generator can flow either into the battery or into the motor. The motor current actuator determines how much flows one way and how much flows the other. To start the battery out charged, let’s specify the initial voltage of the capacitor as 300V.
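The current bookkeeping at the battery node is simple: whatever the generator produces and the motor does not draw goes into (or comes out of) the capacitor, C dv/dt = i_gen − i_motor. Here is a tiny Python sketch of that balance; the capacitance and currents are assumptions, and only the 300 V initial voltage comes from the example.

```python
# "Large capacitor as battery": the net current charges or discharges it.
C = 50.0        # farads, assumed stand-in battery capacitance
v = 300.0       # initial voltage, as specified in Step 2
dt = 0.01

for _ in range(int(60.0 / dt)):       # one minute of operation
    i_gen = 20.0                      # generator current, A (assumed)
    i_motor = 25.0                    # motor current drawn, A (assumed)
    v += (i_gen - i_motor) / C * dt   # C * dv/dt = net current into capacitor

print(f"battery voltage after 60 s: {v:.1f} V")
```

With the motor drawing 5 A more than the generator supplies, the voltage sags at 0.1 V/s, from 300 V to 294 V over the minute.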

Step 3: Modeling the Vehicle

We’ll start with a simple model for the vehicle. The main effects we need to capture are how torque is translated into a force on the vehicle, the drag force present on the vehicle and the overall vehicle inertia. For this model, we are only interested in longitudinal dynamics, that is, we are only interested in modeling the vehicle moving in a straight line. The first step in modeling the vehicle is to add wheels that transform the torque generated from the transmission into forces that move the vehicle forward in a straight line. Note how the wheel model has a rotational connector on one end, indicated by a gray circle, and a translational connector on the other, indicated by a green square. Let’s also add the overall vehicle mass and a damper to represent losses that scale up with speed. In reality, aerodynamic drag scales with the square of speed; this is just an approximation. So far everything looks good!
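In equation form, this vehicle is m dv/dt = τ/r − b·v: wheel torque over wheel radius gives the traction force, and the damper subtracts a speed-proportional drag. A hedged Python sketch with invented parameters (none of these numbers come from the article):

```python
# Longitudinal dynamics only: torque -> traction force -> speed.
m = 1500.0   # vehicle mass, kg (assumed)
r = 0.3      # wheel radius, m (assumed)
b = 60.0     # damper coefficient, N*s/m (assumed drag approximation)

v = 0.0      # vehicle speed, m/s
dt = 0.01
for _ in range(int(30.0 / dt)):
    tau = 300.0                         # constant wheel torque, N*m (assumed)
    f_traction = tau / r                # wheel converts torque to force
    v += (f_traction - b * v) / m * dt  # m * dv/dt = traction - drag

print(f"vehicle speed after 30 s: {v:.1f} m/s")
```

The linear damper gives a terminal speed of (τ/r)/b ≈ 16.7 m/s; a quadratic drag term would approach its limit differently, which is exactly the approximation the text flags.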

Step 4: Simulating the Behavior of the Vehicle

Step 5: Decomposing into Subsystems

To get closer to real-world conditions, we need to refactor this model and improve the control systems. We could start by changing and reconnecting components, but there are a couple of things to watch out for. First, when reconnecting things, you run the risk of introducing errors. Second, we may want to keep the original open-loop control version for testing. To address these concerns, we should follow standard configuration management guidelines.

Our first step is to organize the components by subsystems. To do this, we select the components that are part of the same subsystem and create a new subsystem model. Let’s create a new subsystem called EngineController out of the engine control components – the throttle indicator and fuel flag, while preserving the original components as a new model called OpenLoopController. We perform the same actions for the engine, transmission, transmission-control and vehicle models. The system is now composed of subsystems. The next step is to standardize the interfaces for each subsystem.

Step 6: Creating Interfaces

Let’s define a standardized interface for the engine-controller. In our model, the engine control decides what the throttle position should be and whether to fuel the engine. Our current engine control model reflects this by including two output signals. One is a Boolean signal for the fueling command and the other is a continuous signal indicating normalized throttle position. The current engine control model defines both the interface (the two signals above that it works with) and the implementation (the fact that it uses open-loop commands). We will separate this model into an interface and an implementation.

We’ll also add one additional input signal to the new interface to supply the engine controller with information about the state of the battery voltage.

We now have an interface which defines what is common across all potential engine control models and a specific implementation that just uses open loop commands. 

Let’s follow the same procedure for the engine subsystem, splitting it into an interface and a specific implementation. This interface includes inputs from the engine controller and an output shaft for connection to the transmission. The implementation includes the internal combustion engine and the crankshaft. We will follow this procedure for the transmission, the transmission-controller and the vehicle model, adding additional sensors along the way. 

Step 7: Creating Vehicle Architecture with Interfaces

To create the vehicle architecture, let’s build a new version of our system model, but this time, using only the interfaces that we have developed. After connecting the interfaces together, we end up with a model that looks very similar to what we had before, except this time we haven’t included any implementation details.

This architecture contains only the interfaces; no implementation has been specified. It captures the structure of our system, regardless of the specific implementations we choose to use. Once the architecture has been created, we don’t need to connect subsystem models anymore: all the interfaces have been connected to work across any implementation. The next step is to create a variant of the vehicle and decide which implementations of each subsystem will be included in the variant.

Step 8: Extending the Vehicle Architecture to Implement a Variant

For the base variant, let’s recreate our original model with open-loop control. To do this we need to specify the implementations for each subsystem. At the moment, we only have one implementation for each subsystem. We could directly specify our implementation choices in the architecture model, but a better approach is to leave the architecture model as it is and create a variant from our architecture that captures our specific implementation choices. For this, we simply create a new model that extends from the architecture. When we extend, the new model starts from the old model. From there, we can make further changes, like specifying the implementation details for the different subsystems. This allows us to easily create many different variants of the same fundamental architecture. All these models can exist at the same time instead of constantly switching back and forth between different configurations.

It is worth pointing out that there is no limit to how many times we can extend from a model. For example, we might extend from our architecture to create a baseline configuration of our vehicle where all the implementations are filled in. From there, the engine designers might extend from the baseline model but insert a more detailed engine model while keeping the transmission and vehicle subsystem models the same. Similarly, the transmission designers might do the same with the transmission while leaving the engine and the vehicle as is. These best practices for configuration management organize the models and support collaborative workflows.

Step 9: Implementing a Closed-loop Transmission Controller

Once we’ve gone through and specified all of our initial implementations, we can again simulate the model. Of course, we’d still have the same uninteresting response because of the open-loop elements, but now, we’re in a position to quickly do something about that. For example, let’s create a transmission controller that directs our vehicle to follow a specified speed profile. To do this, we’ll create a new transmission controller implementation by extending from the interface. When we extend, we are not copying the contents of the interface into our implementation. This is important because copying and pasting creates redundancy. By extending, we avoid copying and pasting, making the models easier to maintain.

Once we create our new transmission controller model by extending from the appropriate interface, we just need to fill in the implementation details. Let’s instantiate a PID controller for speed control with a trapezoidal wave pattern for the drive cycle. 
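To make the closed loop concrete, here is a small Python sketch of the same idea: a PI speed controller (the derivative term is left out for brevity) chasing a trapezoidal drive cycle against the simple longitudinal plant from Step 3. The gains and plant parameters are assumptions, not values from the Dymola model.

```python
def drive_cycle(t):
    """Trapezoid: ramp up 0-10 s, hold at 15 m/s 10-20 s, ramp down 20-30 s."""
    if t < 10.0:
        return 1.5 * t
    if t < 20.0:
        return 15.0
    if t < 30.0:
        return 1.5 * (30.0 - t)
    return 0.0

m, b = 1500.0, 60.0      # vehicle mass and drag coefficient (assumed)
kp, ki = 2000.0, 200.0   # PI gains (assumed, hand-tuned)
v = integral = 0.0
dt = 0.01
for step in range(int(20.0 / dt)):
    t = step * dt
    err = drive_cycle(t) - v
    integral += err * dt
    force = kp * err + ki * integral    # PI output: traction force request
    v += (force - b * v) / m * dt       # same plant as the Step 3 sketch

print(f"speed at t = 20 s: {v:.2f} m/s (target 15.0)")
```

Note that when the cycle ramps down, `force` goes negative; that sign change is what later shows up as regenerative braking in Step 11.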

To incorporate the new controller, instead of creating a whole new vehicle model, we can extend from the original open-loop vehicle model and simply change our selection of the transmission controller. Again, we select the subsystem we are interested in, the transmission controller in this case, and we select from a collection of existing controllers. The relationship between different variants of our model is concisely captured. In this case, our current vehicle model extends from the open-loop vehicle model, but replaces the transmission controller with a different transmission controller.

Step 10: Implementing a State Machine for the Engine Controller

Let’s repeat the procedure for the engine controller, extending from the interface to create a new model and implement a state machine to turn the engine on and off. Taking the battery voltage as input, the state machine turns on the engine to charge the battery when the voltage is too low, and turns off the engine to save fuel when the battery voltage is too high. To select this variant of the engine controller, we select the engine controller subsystem and switch to this implementation. The variant choices in the architecture for each subsystem are automatically determined based on the interfaces they implement.
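This on/off logic is a classic bang-bang controller with hysteresis. A sketch in Python, with hypothetical voltage thresholds (the article does not give the actual values):

```python
V_LOW, V_HIGH = 250.0, 320.0   # hypothetical on/off voltage thresholds

def engine_command(voltage, engine_on):
    """Bang-bang with hysteresis: new engine state from battery voltage."""
    if voltage < V_LOW:
        return True      # battery low: run the engine to charge it
    if voltage > V_HIGH:
        return False     # battery full: shut the engine off to save fuel
    return engine_on     # in between: hold the current state

states = []
state = False
for voltage in [300, 260, 240, 280, 330, 310]:
    state = engine_command(voltage, state)
    states.append(state)
print(states)   # the dead band keeps the engine from chattering on and off
```

The band between the two thresholds is what prevents rapid on/off cycling as the voltage hovers near a single set point.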

Step 11: Simulating Vehicle with the New Controllers

With these two new controller implementations, let’s take the vehicle out for a spin. After we simulate the model, we plot the vehicle speed and compare it to the desired drive cycle profile. Here we see the transmission controller is doing a good job of following the drive cycle speed trace. Now let’s look at what is going on with the engine and the battery. Notice how the engine comes on when the battery voltage gets too low and turns off when the battery voltage gets too high. Another important thing to notice about the battery is that it is charging and discharging even when the engine is off. The discharging occurs when the vehicle accelerates, because the motor takes energy from the battery to increase the vehicle speed. But how is the battery recharging when the engine is off? The answer is regenerative braking. When did we implement regenerative braking? We did it in the transmission controller. The PID controller in the transmission controller requests positive torque from the motor when the vehicle needs to accelerate, and requests negative torque when the vehicle needs to decelerate. Because we are using acausal models, all of our components include balance equations for things like mass, momentum, charge and so on. In order for these balance equations to work out, the kinetic energy in the vehicle has to go somewhere. The motor turns it back into electricity when negative torque is requested, and the resulting current flows into the battery, charging it in the process.

The important point here is that we don’t need to implement regenerative braking. We implement a mathematical model of each component and then impose conservation equations across all the connections. As a result, we are always assured of accurate accounting. This is important because if model developers had to implement all the consequences of the different modes for each component it would be very easy to overlook something. With acausal modeling, all of this is taken care of. 
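A quick energy-accounting example shows why the bookkeeping has to work out. Suppose the vehicle brakes from 20 m/s to 10 m/s and a fraction of the released kinetic energy reaches the Step 2 capacitor battery; the mass, recovery efficiency, and capacitance below are assumptions for illustration.

```python
m = 1500.0           # vehicle mass, kg (assumed)
v0, v1 = 20.0, 10.0  # braking from 20 m/s down to 10 m/s
eta = 0.7            # motor/inverter recovery efficiency (assumed)

dKE = 0.5 * m * (v0**2 - v1**2)   # kinetic energy released, J
E_batt = eta * dKE                # portion recovered into the battery

C, V = 50.0, 300.0                # capacitor "battery" from Step 2
# Energy balance: 0.5*C*V_new^2 = 0.5*C*V^2 + E_batt
V_new = (V**2 + 2.0 * E_batt / C) ** 0.5
print(f"recovered {E_batt/1000:.0f} kJ, battery voltage {V:.0f} -> {V_new:.1f} V")
```

The point of the arithmetic is the conservation argument from the text: the 225 kJ of kinetic energy has to go somewhere, and the part that is not lost shows up as a measurable rise in battery voltage.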

Next Step: Leverage the Vehicle Library to Model Your Vehicle

While it’s good to know that all of this is possible with Dymola, it is also important to realize that you don’t need to start from scratch. This architecture-based approach is very common and many of the specialized libraries that come with Dymola include not only high-quality component and subsystem models but also the interfaces and architectures that give you a head start in building models of your system. You can also import your legacy models into Dymola, either using direct interfaces, or by adopting the standard FMI® interface.

Vehicle Systems Modeling and Analysis (VeSyMA®) is a complete set of libraries for vehicle modeling and simulation. It includes engine, powertrain and suspension libraries that work in conjunction with the Modelica® Standard Library. In addition, battery, electrified and hybrid powertrain libraries are also available. Please watch the other videos in this series for more information on Dymola.

Contact Adaptive today to learn more about what our 3DEXPERIENCE solutions can do for you. Then, sign up for our newsletter.

Next Wednesday look for Requirements Simulation for Systems Engineers

Ramesh Haldorai is Vice President, Strategic Consulting, 3DEXPERIENCE platform at Dassault Systèmes.

12 Jun 2019

Model-Based Systems Engineering


In the 20th century, systems engineering methodology was developed so that a system could be decomposed into multiple sub-systems and each sub-system could be independently engineered, manufactured and serviced. The emphasis was on defining requirement specifications such that each sub-system and its interactions with other sub-systems were clearly defined. This method emphasized upfront planning, analysis and specification; hence the term Requirements Driven Systems Engineering. In practice, it was always very difficult to specify everything upfront with a high level of accuracy and to resist changes to specifications during development. By and large this methodology has been inadequate and has led to delayed programs and last-minute surprises, commonly referred to as the requirements-delay-surprise factor!

In the 21st century, iterative modeling and simulation play a crucial role in systems engineering. An operational model is first developed to understand all usage conditions, including the surrounding environment; then systems models are built and simulated; finally, component models are developed.

Change is integral to this methodology: requirements, structure, and behavior are derived and finalized with the help of the models. In short, the model is the master!

The fidelity of the models is continuously improved during development, and it is possible to substitute models with physical systems, also called Hardware-in-the-Loop (HIL). When the physical systems are assembled, they are effectively a twin of the model. Tests conducted on this physical prototype can be continuously correlated against predicted behavior and be used to improve the fidelity of the models.

Models are Everywhere

It’s fairly common today for mechanical engineers to develop CAD models and subject them to structural, fluid and thermal simulation. Similarly, a number of models are built by engineers from other disciplines: software engineers use models to specify the operating conditions and interactions between systems; control systems developers build block-based models and generate software code for controllers; electrical engineers develop schematics and layouts of their design; electronics engineers develop high-level logic that is synthesized into physical design; hydraulic engineers define hydraulic circuits. When interdisciplinary work is critical, co-simulations are also performed. For example, the thermal and fluid dynamics aspects are simulated together to understand the performance of the climate control systems.

Systems integration nightmare. Since each discipline is working on their own models, most often the first time the engineers witness how the systems function together is when they finally assemble a physical prototype. It’s not uncommon that the physical prototypes require numerous build, test, fix iterations, before they work as intended. The net effect: projects are delayed and quality suffers.

The era of autonomous systems. New types of sensors, complex control algorithms and the integration of on-and-off-board systems are the drivers of autonomous capabilities. This leads to an increase in software-based functionality and E/E complexity never seen before.

Even though models are used by every engineer, they are siloed by discipline, requiring physical prototypes for integration and validation.


Smart, Safe and Connected

Design, validate, deliver the intelligent vehicle experience

Smart, Safe and Connected Industry Solution Experience based on the 3DEXPERIENCE platform connects mechanical, embedded electronics and software development teams on an end-to-end digital platform and enables them to build a multi-disciplinary virtual prototype right from the early stages of concept design.

Make your vehicle models dynamics capable. If you are currently building control systems models in a signal-flow oriented tool, higher fidelity and more accurate control can be achieved by incorporating co-simulation of dynamic physical systems with Dymola.

Signal-flow oriented control models typically don’t fully incorporate dynamic vibrations caused by road and driving conditions. These vibrations affect driving comfort and safety, and if not incorporated in the models, may lead to issues discovered only later during physical testing. Simulating dynamic behavior under various road and driving conditions helps identify and fix issues in the early phases of product development.
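As a flavor of what such dynamic simulation adds, here is a minimal quarter-car-style sketch in Python: a sprung mass on a spring-damper suspension excited by a sinusoidal road profile. All parameters are illustrative assumptions, not values from any Dymola library.

```python
import math

# Sprung mass on a spring-damper, driven by a 1 Hz, 5 cm road profile.
ms = 400.0    # sprung mass, kg (assumed)
k = 20000.0   # suspension stiffness, N/m (assumed)
c = 1500.0    # suspension damping, N*s/m (assumed)

z = zd = 0.0  # body displacement (m) and velocity (m/s)
dt = 0.001
peak = 0.0
for step in range(int(10.0 / dt)):
    t = step * dt
    road = 0.05 * math.sin(2 * math.pi * t)              # road height, m
    road_d = 0.05 * 2 * math.pi * math.cos(2 * math.pi * t)
    f = k * (road - z) + c * (road_d - zd)               # suspension force
    zd += f / ms * dt
    z += zd * dt
    if t > 8.0:                                          # skip the transient
        peak = max(peak, abs(z))

print(f"steady-state body displacement amplitude: {peak*100:.1f} cm")
```

Because the 1 Hz road input sits near this suspension's natural frequency (about 1.1 Hz), the 5 cm road profile is amplified to roughly twice that at the body, exactly the kind of comfort issue that only shows up once vibration dynamics are in the model.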

Dynamic Systems Modeling and Simulation

The design and engineering of autonomous systems requires a new model-based systems approach – it needs to be multi-disciplinary from the get-go! Dymola and the VeSyMA libraries embody decades of experience with dynamic systems modeling.

Dymola is a physical modeling and simulation environment, for model-based design of dynamic systems. Dymola adopts the Modelica language and is accompanied by a portfolio of multi-disciplinary libraries covering the mechanical, electrical, control, thermal, pneumatic, hydraulic, powertrain, thermodynamics, vehicle dynamics and air-conditioning domains. Library component models are coupled together to form a single complete model of the system. You can extend existing models or create custom models, improving reuse across projects.

Vehicle Systems Modeling and Analysis (VeSyMA™) is a complete set of libraries for vehicle modeling and simulation. It includes engine, powertrain and suspension libraries that work in conjunction with the Modelica Standard Library. In addition, battery, electrified and hybrid powertrain libraries are also available.

Model Import and Export. You can import models directly from Simulink® into Dymola. Dymola also supports the FMI Standard 1.0 and 2.0 for both model exchange and co-simulation.

Real-time Simulation. Dymola supports real-time code generation for simulation on a wide range of HiL platforms from dSPACE®. Co-simulation of Dymola and Simulink generated code has been tested and verified for compatibility with multiple combinations of dSPACE and MATLAB® releases.

Driver-in-the-Loop Simulation. Dassault Systèmes’ partner, Claytex, integrates Dymola with driving simulation software. Libraries built by Claytex include a car template and support for LiDAR, radar and ultrasound sensor libraries that work with the simulator. Before exporting the model, simulation can be run in a test environment within Dymola as well.

Systems modeling case-study: Energy management of trucks at Arrival.

Source: energyengineering magazine. Low Carbon Vehicle special edition – Summer 2016

The electrification of trucks involves efficient energy management and also needs to maintain the vehicle attributes at the same level as a conventional powertrain system. Hence, it requires detailed studies of vehicle system interactions in order to understand the vehicle system that dominates these attributes. The upfront modeling approach is vital to capture these attributes before developing the physical prototype.

Dymola has a multi-physics modeling capability that is very useful in developing these complex interactions at both vehicle system level and sub-system level, and for pin-pointing the dominant systems or components. All of these vehicle systems/subsystems can be modeled within the same modeling workspace at the top level and then cascaded to a lower level in order to create a series of libraries that can be repeatedly used for different vehicle plant model architectures. This process is important for system modeling, particularly during the development phase, giving engineers access to different options to optimize the system architecture for energy management and the improvement of other vehicle attributes. The process minimizes the design and product risks by not committing tooling costs for the prototype build, as the majority of the validation activities can be simulated to produce results that are a close representation of the physical system/sub-system components, which also reduces the development lag-time. Another advantage of system modeling is being able to perform component sizing optimization for energy management in order to improve the vehicle range.

Dynamic Physical Systems Modeling - A Checklist

If you want to incorporate dynamic behavior into your vehicle models, the following are some of the key capabilities of the modeling environment that you may want to consider.

Breadth of Library Models: Are there pre-built libraries for the sub-systems that are included in your system? If your systems are multi-disciplinary in nature, look for libraries across multiple domains containing models for mechanical, electrical, control, thermal, pneumatic, hydraulic, powertrain, thermodynamics, vehicle dynamics, air conditioning, etc.

Object Oriented: Can you directly instantiate the library models and build your systems with ease? Typically, look for a drag-and-drop interface. Also, look for the ability to abstract subsystems in a single model. If necessary, can you modify the library models and create your own derivatives of the models? Model management capabilities are a key requirement if you are working in a team.

Equation Based: Can the dynamic behavior of systems be described by differential and algebraic equations? Does it support the concept of flow and across variables?

Acausal: Does the environment support the definition of equations in a declarative form, without considering their sequence? This reduces the effort to describe behavior in comparison with procedural languages like C and other block-based tools, where signal flow direction needs to be considered.
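The practical difference: in an acausal tool you state the component and connection equations and the tool sorts and solves them; you never assign inputs and outputs. The toy Python snippet below mimics this for a two-resistor circuit by writing the equations as a linear system and letting a solver (here, Cramer's rule) determine the computation order. The circuit and values are invented for illustration.

```python
# Declarative component equations for a 10 V source driving R1 = 100 ohm
# and R2 = 200 ohm in series; unknowns are the mid-node voltage v and
# the loop current i:
#
#   10 - v = 100 * i    (R1's constitutive equation)
#   v      = 200 * i    (R2's constitutive equation)
#
# Written as the linear system A @ [v, i] = b:
A = [[1.0, 100.0],
     [1.0, -200.0]]
b = [10.0, 0.0]

# Cramer's rule stands in for the symbolic sorting and solving that a
# Modelica tool performs automatically; the modeler never chose an order.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
v = (b[0] * A[1][1] - A[0][1] * b[1]) / det
i = (A[0][0] * b[1] - b[0] * A[1][0]) / det
print(f"v = {v:.3f} V, i = {i * 1000:.2f} mA")
```

The same two equations work whether you later treat the source voltage or the current as the known quantity; a block-based tool would force you to redraw the signal flow for each case.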

For a review of Dymola capabilities, please click here.

Contact Adaptive to find out more about what 3DEXPERIENCE can do for you.

Next week: Dynamic Vehicle Behavior Simulation – A Deep Dive

Ramesh Haldorai is Vice President, Strategic Consulting, 3DEXPERIENCE platform at Dassault Systèmes.

30 May 2019
digital twin

Understanding and Planning for Digital Twins

Adoption of digital twins is on the rise. Thirteen percent of Gartner survey respondents that have implemented IoT have also implemented digital twins, and 62% are working on implementation or are planning to implement them in the next year.

Companies that are considering making use of digital twins—and even organizations that wonder what they are and what they can do—will find a new white paper by Gartner, “What to Expect When You’re Expecting Digital Twins,” extremely useful. Its goal is to help companies that are planning to implement or are already implementing the technology by providing a thorough understanding of the types of digital twins, their relationships to existing business applications, and their potential impact on those apps.

What are digital twins?

A digital twin is a new type of enterprise software component: a digital proxy or virtual representation of a business entity, whether a person, process, or thing, most often associated with IoT-connected items. IoT provides a stream of real-time data to analyze the state of the business. Digital twins are used to increase situational awareness and to monitor the overall health of a part or process: does it need to be replaced soon? Is it wearing faster than usual? Digital twins help gain a better understanding of how business resources evolve and change—both of which then drive improvements in commercial processes and other forms of business value.
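As a purely illustrative sketch (every name and threshold below is invented, not from the Gartner paper), a discrete digital twin for a single part might be little more than an object that mirrors sensor readings and answers those two health questions:

```python
class PartTwin:
    """Digital proxy for one physical part, fed by IoT readings."""

    def __init__(self, nominal_wear_per_hour):
        self.nominal = nominal_wear_per_hour
        self.wear = 0.0     # accumulated wear, fraction of allowable
        self.hours = 0.0    # operating hours reported so far

    def update(self, measured_wear, hours):
        """Mirror the latest sensor readings into the twin."""
        self.wear = measured_wear
        self.hours = hours

    def needs_replacement(self, limit=1.0):
        return self.wear >= limit

    def wearing_fast(self):
        """Is the observed wear rate well above the nominal rate?"""
        return self.hours > 0 and self.wear / self.hours > 1.5 * self.nominal

twin = PartTwin(nominal_wear_per_hour=0.001)
twin.update(measured_wear=0.4, hours=200)    # observed rate: 0.002/hour
print(twin.needs_replacement(), twin.wearing_fast())   # -> False True
```

Real digital twins add simulation models, history, and analytics on top, but the core idea is the same: a software object kept in sync with its physical counterpart so the business can ask it questions.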

Gartner identifies three different types of digital twins and defines an emerging role for each, as well as their relationship to each other. Each type of digital twin monitors and optimizes a different scope of individuals, assets, processes, and operations within a company:

  • Discrete digital twins focus on individual assets, people, and other physical resources.
  • Composite digital twins involve a combination of discrete digital twins and resources.
  • Digital twins of organizations (DTOs) maximize value across specific commercial processes or entire business operations.

Does Every Product Warrant a Digital Twin?

The key issue around digital twins is determining what really needs to be a digital twin. The larger and more complex a product is, the more likely a digital twin makes sense. For example, an airplane or ship would be an ideal candidate for a digital twin because you would want to have proactive maintenance on those large assets and likely use IoT for feedback mechanisms on those parts and systems. However, if you make smaller products like a phone or something ubiquitous, having a digital twin for every serialized phone or part would not make sense. Creating a digital twin may make sense only for major models or releases.

Digital twins introduce massive amounts of data, and serious thought needs to be given to what data is being collected and where it is shared across the business. This is where a PLM platform like 3DEXPERIENCE comes into play: it is designed to help manage product variations and how information is managed for every major design and instantiation. Jonathan Scott from Razorleaf recently wrote an interesting article titled “Start Now: Profiting From The Digital Twin Can Take Time”; I’d recommend reading it as he outlines some of the challenges mentioned here in more detail.

Getting back to the Gartner paper — it details different digital twin design patterns and key characteristics, including proliferation, complexity, inheritance, organization, and interoperability. Common to all three types of digital twins are the two vital roles they perform for business: improving situational awareness and providing information to help companies make better business decisions.

Digital twins aren’t meant to replace business applications, but to extend their value. While they may come as an embedded part of newer, IoT-native applications, they can also be added to existing, pre–IoT era apps. Gartner offers suggestions for how to acquire them—whether they are part of purchased software, pre-developed modules to be integrated into existing software, developed in-house for integration, or outsourced for custom development—as well as recommendations for how to plan for and utilize them.

Want More Information on Digital Twins?

For a copy of the Gartner paper, please complete our Contact us form and we will email you the paper. Or, if you’d like to discuss how you might incorporate digital twins into your organization, our PLM consultants would love to have that conversation. Contact us and we will be in touch with you shortly.

26 Apr 2019

Adaptive’s Cynde Murphy to Speak at V&V Symposium

V&V Verification and Validation Symposium
Conference May 15-17
Training & Committee Meetings May 13-14
Westgate Resorts, Las Vegas, Nevada

Dynamic Load and Weld Fatigue Calculation for Validation of a Telescoping Boom Chassis presented by:

Adaptive’s Cynde Murphy, Simulation and Services Manager
Bob LeGrande, Engineer Principal III, Terex Corp., AWP Division – Genie Industries
Kyle Roark, Engineer Principal II, Terex Corp, AWP Division – Genie Industries

Analytical simulation is a powerful tool for understanding the dynamic behavior and fatigue life of any structure. However, one of the most challenging tasks in developing a simulation is creating accurate and realistic load cases, which replicate field strains in the structure, so that it may be used for validation. Once a representative finite element model (FEM) of a structure is created, challenges arise in understanding and applying dynamic loads to the FEM so that correlation and validation with physical testing are accurate. One step further in complexity is being able to calculate dynamic stress profiles for the entire structure and use those results for further investigation, in this case fatigue estimates.

Historically, analysts have had to rely on expensive prototyping and time-consuming full-vehicle measurements, even within the iterations of one design concept. Analyze-Build-Test is quickly becoming a thing of the past, as product development companies strive for quick-to-market designs. Simulation experts at Adaptive Corporation, in conjunction with Terex, were able to circumvent this traditionally laborious process and develop an efficient and accurate validation process. Our team leveraged ANSA, Abaqus, Wolf Star Technologies True-Load™ software and fe-safe to develop an FEM, understand the dynamic mechanical loads and develop a duty cycle for the Terex telescoping boom chassis. This body of work can and will be subsequently used for design, simulation, fatigue analysis, validation and engineering development of iterations of the same chassis structure, as well as similar chassis designs.

The general steps of the process are as follows:

FEM Creation of Assembly: An FEM of the chassis structure was created using ANSA pre-processor software.

Instrumentation and Data Acquisition: To calculate the mechanical loads acting on the Terex telescoping boom chassis for the FEM, accurate strain measurements were required. Using the FEM and the Wolf Star Technologies True-Load™ software, optimal strain gage placement was identified, and gages were installed on a test chassis. Strain gage time history data was then recorded for various proving ground events, such as cornering, washboard, curbs and potholes.

Load Calculation and Development: Using the time history data from the optimal map of strain gages, the FEM and True-Load™ software, equivalent (dynamic) unit loads were calculated.

Fatigue Analysis: Using dynamic stress results from the FEM and a duty cycle that combined the various proving ground events, fatigue life estimates for the Terex telescoping boom chassis and its critical welds were calculated.

Calculating the mechanical loads for this project gives Terex the ability to rapidly iterate on designs for this chassis, as well as a starting point for other similar chassis designs. This ultimately saves time and money in the product development cycle by reducing the effort of the traditional design-build-test cycle by means of “virtual validation.”
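The load-calculation step above rests on a simple linear idea: if the strain at each gage responds linearly to a set of candidate unit loads (with sensitivities taken from unit-load FEM runs), then measured strain time histories can be inverted in a least-squares sense to recover load time histories. The sketch below illustrates that general principle only; it is not True-Load™'s actual implementation, and the sensitivity matrix and load histories are invented for the example.

```python
import numpy as np

# Hypothetical sensitivity matrix S[i, j]: strain at gage i per unit of
# load case j, as would come from unit-load FEM runs.
# 3 gages x 2 load cases (microstrain per unit load, made-up values).
S = np.array([
    [120.0, 15.0],
    [40.0, 95.0],
    [80.0, 60.0],
])

# Synthesize "measured" strain from known load histories so we can check
# the inversion: strain(t) = loads(t) @ S.T
t = np.linspace(0.0, 1.0, 200)
loads_true = np.column_stack([np.sin(2 * np.pi * t),
                              0.5 * np.cos(4 * np.pi * t)])
strain_measured = loads_true @ S.T  # shape (200 samples, 3 gages)

# Recover the load histories: solve S @ loads(t) ≈ strain(t) by least squares.
loads_est, *_ = np.linalg.lstsq(S, strain_measured.T, rcond=None)
loads_est = loads_est.T

# Residual is close to zero for noise-free, perfectly linear data.
print(np.max(np.abs(loads_est - loads_true)))
```

With more gages than load cases (here 3 vs. 2), the system is overdetermined, which is what makes noisy real-world strain data invertible in a least-squares sense.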

If you aren’t able to attend the Verification and Validation (V&V) Symposium, tune in to our on-demand webinar, “Dynamic Load Calculation and Correlation of an Aluminum Truck Body,” also presented by Cynde Murphy.

28 Mar 2018
Abaqus Knee Simulator | SIMULIA | Adaptive

New Video: Abaqus Knee Simulator – Accelerating the Design of Knee Implants

A new video now available from the Dassault Systèmes SIMULIA group demonstrates how the Abaqus Knee Simulator application can accelerate the advanced design of knee implants using finite element (FE) analysis and 3D modeling. The video is hosted by Cheryl Liu, Ph.D. of Dassault Systèmes SIMULIA and Paul Rullkoetter, Ph.D. of OrthoAnalysts.

What is the Abaqus Knee Simulator?

The Abaqus Knee Simulator is a validated computational modeling tool for performing basic to advanced knee implant analyses and simulations. This tool offers five fast and easy-to-setup workflows which reduce your reliance on time-consuming trials and expensive lab equipment, while still meeting regulatory requirements. The video includes an overview of the five workflows, validation of the model, and a demonstration of the software tool.

The Benefits of the Abaqus Knee Simulator versus Physical Simulation

The Knee Simulator is an application that works with Dassault Systèmes SIMULIA software. The application includes five pre-validated workflows knee implant design engineers can use to test their designs without the time-consuming process of creating physical models. The five workflows are: Contact Mechanics, Implant Constraint, TibioFemoral Constraint, Basic TKR Loading, and Wear Simulator.

Dr. Liu explains how the Contact Mechanics workflow can take up to four hours to run. In contrast, creating a physical model to conduct the same test could take four weeks and cost approximately $14,000.

About Abaqus

Today, product simulation is often being performed by engineering groups using niche simulation tools from different vendors to simulate various design attributes. The use of multiple vendor software products creates inefficiencies and increases costs. SIMULIA delivers a scalable suite of unified analysis products that allow all users, regardless of their simulation expertise or domain focus, to collaborate and seamlessly share simulation data and approved methods without loss of information fidelity.

The Abaqus Unified FEA product suite offers powerful and complete solutions for both routine and sophisticated engineering problems covering a vast spectrum of industrial applications.

View the video here.


06 Oct 2016

Learn How Trek Bicycles Used SIMULIA to Take Its Bikes to the Next Level

In its ongoing race to advance bike performance, Trek wanted to expand the use of realistic simulation in their design cycle across multiple bike programs. In particular, Trek engineers wanted to better understand how bikes performed under the real-world rides of its professional racers.

Simulation provided new insights into the loading environment during extreme-use cases that will help Trek take its bikes to the next level without overdesigning. As a result, the larger Trek engineering group can now tap into these capabilities and leverage field data to improve lab testing processes and enhance product performance across the enterprise.

“Up until then, I didn’t think handheld had the accuracy, the resolution, the quality, that the tripod-mounted scanners have… But now I know that isn’t the case.” — Jay Maas, Analysis Engineer, Trek Bicycles


Read in detail about Trek’s experiences in the case study available here.


02 Sep 2015

Adaptive Acquires Leading Edge Engineering

Adaptive Corporation Strengthens Virtual Simulation Capabilities through its Acquisition of Leading Edge Engineering, a Reseller of Simulation and Analysis Tools and Services for New Product Design and Development

Hudson, OH – September 1, 2015 – Adaptive Corporation has fully acquired Leading Edge Engineering P.C., a technology and services company focused on solving product design challenges using virtual simulation software. The technology, also referred to as computer-aided engineering (CAE) software, helps engineers develop better-performing, more cost-effective products in less time. Through this acquisition, Adaptive expands its expertise and footprint for addressing simulation needs in a broad range of multi-physics disciplines, including load development, fatigue analysis, vehicle dynamics, process automation and optimization.

“Simulation is a very important part of the design process,” stated Eric Doubell, chief executive officer of Adaptive Corporation. “We are seeing an increase in demand from our clients for simulation expertise that can help root out potential design flaws before a product design is released to production. Providing technologies that can create virtual models, test the loads and analyze what-if scenarios ultimately helps accelerate our clients’ go-to-market delivery. We welcome Leading Edge customers into our Adaptive client community and look forward to working together as we align our resources.”

About Simulation and Analysis Software

The need for Simulation grows as manufacturers continue to improve and streamline a collaborative design process and accelerate their ability to go to market with new product development introductions (NPDI). A recent study by Aberdeen Group noted these impacts:

Virtual simulation users have seen a 10% decrease in Engineering Change Orders (ECOs), while those still relying on manual calculations have seen ECOs increase by 8%. This means that companies using simulation software are able to fix their designs before they reach production, unlike those using manual methods, who fix their products afterward.1

CIMdata predicted that Simulation and Analysis will be one of the more rapidly growing segments within the tools sector of PLM over the next five years, forecasting that this sector of the market will exceed $6.7 billion in 2019.2

“We have been at the forefront of simulation technologies since we founded the company,” stated Wayne Tanner, President of Leading Edge Engineering. “We have seen the need for comprehensive simulation tools grow as the complexity of new product development increases, along with the requirement to collaborate more effectively to reduce risk and automate design processes. We are excited to be a part of the Adaptive team, where we can address the full scope of PLM challenges and help our clients automate their PLM needs by leveraging Adaptive’s portfolio of solutions under one umbrella.”

About Adaptive Corporation

Adaptive Corporation is dedicated to connecting virtual design to the physical world by creating solutions that help its clients innovate, validate and refine the design of new products being introduced to the market. With Adaptive’s technical expertise, your design engineering team can be assured you will build products right the first time, via the most efficient and profitable path possible. Covering the full span of Product Lifecycle Management solutions, Adaptive addresses Virtual Product Design, Product Data Intelligence, Enterprise Collaboration, Digital Manufacturing, Simulation and Metrology/Quality Control. For more information, visit


1 Aberdeen Group: “The Value of Virtual Simulation As Opposed to Other Tools,” December 2014

2 CIMdata: Simulation and Analysis (S&A) Market Analysis Report, July 2015

PR Contact

Juliann Grant
(330) 672-0022 x7151