Enterprises that run an Oracle SOA Suite BPEL environment to process workflows typically manage a production environment plus multiple test environments, and may need to mass-abort instances, especially to reset the Oracle SOA BPEL environment before running a non-regression or performance test.
Implementing a working purge script to get rid of old Oracle SOA BPEL instances is one of the most important tasks in maintaining a healthy Oracle SOA Suite environment in production. Forgetting to do so will slow down the whole environment as the Oracle SOA BPEL database grows.
Depending on your business requirements, I usually advise purging instance data older than 2 months. Instance data is useful to troubleshoot issues with BPEL processes and shouldn’t be kept once those processes have completed successfully.
You may also need to update the status of failed instances to force them into a purgeable state, especially for unrecoverable instances (see the bottom of the gist).
SQL queries to troubleshoot and monitor your Oracle SOA Suite BPEL purge scripts:
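The full gist is environment-specific, but here is a minimal sketch of the kind of query I use to check that the purge keeps up. It assumes the standard 11g <PREFIX>_SOAINFRA dehydration store (CUBE_INSTANCE) and a 60-day retention period; adjust both to your environment.

```sql
-- Instances older than the retention period (here 60 days) that are still in the
-- dehydration store, grouped by composite and state. If the purge job keeps up,
-- the counts for completed states should stay close to zero.
SELECT composite_name,
       composite_revision,
       state,
       COUNT(*) AS instance_count
  FROM cube_instance
 WHERE creation_date < SYSDATE - 60
 GROUP BY composite_name, composite_revision, state
 ORDER BY instance_count DESC;
```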
Oracle SOA BPEL infrastructures are notoriously hard to scale. Once your production Oracle SOA BPEL database grows, you’ll often find that the performance of your whole Oracle SOA BPEL infrastructure degrades. To fix it, you need to focus on:
- Implementing the right database purge strategy
- Monitoring large instances to improve your composite performance (see https://cedricleruth.com/useful-monitoring-sql-queries-for-oracle-soa-bpel/)
- Implementing the proper Oracle Database partitioning
How to partition your Oracle SOA BPEL database for performance?
Below is an example of a partitioning script written after helping two of my clients running an Oracle SOA BPEL 11g infrastructure. This script is provided “as is” without warranty of any kind. Do not use it directly in your production environment. Please read it, test it, and adapt it to your specific instances.
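The script itself is too environment-specific to publish as is, but the core idea is to move the biggest instance tables to monthly range partitions on their creation date, so old data can be dropped partition by partition instead of row by row. The sketch below is an illustration only, using the CUBE_INSTANCE table of the 11g dehydration store; the procedure documented by Oracle covers many more tables, their indexes, and a verification step before switching over.

```sql
-- Illustration only: a partitioned copy of CUBE_INSTANCE with monthly range
-- partitions on CREATION_DATE.
CREATE TABLE cube_instance_part
PARTITION BY RANGE (creation_date) (
  PARTITION p_2020_01 VALUES LESS THAN (TO_DATE('2020-02-01', 'YYYY-MM-DD')),
  PARTITION p_2020_02 VALUES LESS THAN (TO_DATE('2020-03-01', 'YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM cube_instance;

-- Once a partition ages past the retention period, it can be dropped in seconds
-- instead of deleting millions of rows:
ALTER TABLE cube_instance_part DROP PARTITION p_2020_01 UPDATE GLOBAL INDEXES;
```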
If you run Oracle SOA BPEL on-premise and at scale, you know how much easier it can be to monitor your SOA BPEL processes through SQL queries. Those queries become more valuable as the number of SOA BPEL instances grows, because the Enterprise Manager console usually struggles with large payloads. You’ll find below a couple of SQL queries to:
- Identify BPEL Errors from the last 10 minutes
- Identify Business faults from the previous 10 minutes
- Identify instances not purged after the retention period (here 70 days)
- Get the number of processes and their states for each composite name and revision (useful to identify composites due to be undeployed)
- Multiple SQL queries to monitor the database size
- Get the list of composite, revision, and date of their last instances
Useful SQL queries for monitoring Oracle SOA BPEL:
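As a starting point, here is a minimal sketch of two such queries. It assumes the standard <PREFIX>_SOAINFRA schema (COMPOSITE_INSTANCE, COMPOSITE_INSTANCE_FAULT and the schema’s own segments); verify the column names against your SOA version before using them.

```sql
-- Composite faults raised in the last 10 minutes.
SELECT cif.created_time,
       ci.composite_dn,
       cif.reference_name,
       cif.error_message
  FROM composite_instance_fault cif
  JOIN composite_instance ci
    ON ci.id = cif.composite_instance_id
 WHERE cif.created_time > SYSDATE - (10 / (24 * 60))
 ORDER BY cif.created_time DESC;

-- The 20 biggest segments of the SOAINFRA schema, to watch dehydration store growth.
SELECT *
  FROM (SELECT segment_name,
               segment_type,
               ROUND(bytes / 1024 / 1024) AS size_mb
          FROM user_segments
         ORDER BY bytes DESC)
 WHERE ROWNUM <= 20;
```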
Implementing a global common fault policy is essential when building a SOA BPEL composites architecture. As your number of production BPEL instances grows, so does the risk of losing critical data when a BPEL process crashes in production, especially in large architectures where unexpected usage surges can trigger slowness and failures. Implementing a simple retry fault policy allows the faulted instances to resume once the environment returns to normal.
Below is an example of a simple retry global fault policy that will retry for up to 7 days any BPEL instance in error due to a technical issue. To do so, update these three files as follows:
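Here is a sketch of a typical layout: fault-policies.xml defines the retry action, fault-bindings.xml binds it at the composite level, and two properties in composite.xml point to both files (for a truly global policy, the two XML files are usually stored in MDS or the domain configuration instead). The retry values, 168 retries one hour apart for roughly 7 days, and the policy name GlobalRetryPolicy are examples to adapt to your environment.

```xml
<!-- fault-policies.xml: retry technical faults every hour, up to 168 times (~7 days) -->
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.1" id="GlobalRetryPolicy">
    <Conditions>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:remoteFault">
        <condition>
          <action ref="ora-retry"/>
        </condition>
      </faultName>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:bindingFault">
        <condition>
          <action ref="ora-retry"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <Action id="ora-retry">
        <retry>
          <retryCount>168</retryCount>
          <retryInterval>3600</retryInterval>
          <retryFailureAction ref="ora-human-intervention"/>
        </retry>
      </Action>
      <Action id="ora-human-intervention">
        <humanIntervention/>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>

<!-- fault-bindings.xml: bind the policy to the whole composite -->
<faultPolicyBindings version="2.0.1"
                     xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <composite faultPolicy="GlobalRetryPolicy"/>
</faultPolicyBindings>

<!-- composite.xml: reference both files so the policy is picked up at deployment -->
<property name="oracle.composite.faultPolicyFile">fault-policies.xml</property>
<property name="oracle.composite.faultBindingFile">fault-bindings.xml</property>
```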
Any organization considering acquiring an ERP software module, from CRM to Finance, HR, SCM, or Sales & Marketing, will ask how to integrate its existing data. Whether it is a Cloud or on-premise ERP, the methods for importing, exporting, and continuously synchronizing data with your organization’s existing systems are the same. The main difference with the Cloud is that it is usually required to duplicate some existing organizational data. In the following sections, I’ll share my typical explanation of these notions with new customers.
First of all, a quick glossary:
- An Enterprise resource planning (ERP) software is a business management software that an organization can use to collect, store, manage, and interpret data from their activities. (Financial Accounting, Management Accounting, Human resources, Manufacturing, Order processing, Supply chain management, Project management, Customer relationship management, etc.)
- Customer relationship management (CRM) software allows a company to manage and analyze its interactions with its past, current, and potential customers.
- Cloud computing software is a software model in which customers’ services are available over the internet on a subscription/per-use basis.
- Middleware is communication software that enables communication and data management between distributed applications.
- An ETL is a type of Middleware; the name is short for Extract, Transform, and Load: three functions combined into one tool to pull data out of one system, clean and transform it, and load it into another.
Data integration in an ERP Cloud
Data integration is about combining data from different internal and external sources into a single, centralized repository. For example, a business can store customer data in a local database, manage inventory data with a third-party platform, and want to centralize all those data into a data warehouse or an ERP module like a CRM.
Such situations are not uncommon. As a business grows and changes, so do their software and data needs, and a strategy that once made sense needs to be revised.
The ETL process and other modern data streaming approaches are at the heart of data integration. Data integration begins with extracting data from multiple sources and moving them into a single data warehouse. (For businesses and organizations that do not use a data warehouse, the process is similar, although the data will be integrated directly from the source.) To facilitate the integration process, a cloud ERP offers a range of interface points, including REST, SOAP, and a BULK API.
During the transformation step, data is cleaned, validated, organized, and standardized. At this point, all of the different datasets are now in conversation with each other. Finally, the converted data is loaded at its final destination.
Data migration or Data integration?
The two terms describe distinct processes. They do, however, share some of the same implementation techniques.
Data integration is combining data from multiple sources, internal and external, into a target system. Data integration describes a unified set of smaller processes. Each process allows the extraction, transformation, and loading of a different data model. (customers, addresses, orders, etc.)
Data migration involves moving data from one system to another. When a company decides to change its existing CRM system, or to upgrade from an older version to a more recent one, it must migrate all data from the current software to the new one.
Common integration methods
So far, we’ve provided an overview of the data integration process and how it combines data from multiple origins into one view and source. Some of the different data integration methods include:
Manual data consolidation
This part of the process typically requires a conventional ETL, although some companies may use custom-built tools or a simple Excel extraction. Manual consolidation can work well for smaller, more specific datasets that don’t require a deep clean, but it can be too time-consuming and error-prone for larger datasets. Besides, the lack of real-time data limits its usefulness.
Propagation of data from source applications
The goal here is to propagate the data from the individual applications to the ERP, and the integration logic to achieve this lives in the client applications. Rather than a standard tool or approach to moving data into the warehouse, each application takes responsibility for moving its data to the central store. This method is generally adopted when heavy data cleaning and manipulation are needed, and the application is in the best position to understand and perform these operations.
This approach is challenging to maintain because applications are subject to change, which often means that the integration logic needs to be rebuilt or adjusted.
Propagation of data using a Middleware
This method ignores the logic of application integration and shifts the responsibility to the Middleware. For example, a subscription mechanism configured between the Cloud ERP and the data warehouse ensures that whenever there is an update, an event is triggered to automatically publish the data to the warehouse, keeping it up to date.
Even when applications change, the Middleware maintains its function as a bridge transferring data to the ERP.
For this method to work, there must be an implementation layer that manipulates and transforms the data into a format that the consumer understands.
Data virtualization
With data virtualization, data is not extracted and stored in a common repository; instead, a mechanism provides remote access to data from multiple sources.
The technique has the advantage of not requiring you to create and manage a Middleware, and it offers up-to-date data in real time without any data replication. It is perfect for highly secure applications that do not allow data to be stored elsewhere. However, it limits the scope of how the ERP can use this data, and the ERP is also constantly polling these data sources, adding performance load to those databases.
This technique is not available on all Cloud ERPs.
The challenges of data integration
54% of Salesforce business customers identified integrating apps and data sources as their top challenge. Let’s take a look at a few areas where data integration remains a challenge:
Find the right experts
Integrating a cloud ERP with a data warehouse requires experts in different fields such as Cloud technologies, ERP modules, data warehouses, and Middleware technologies. Building such a team and ensuring that they communicate effectively can be a challenge.
Complexity of systems
Bringing together data from many systems using different technologies and locations can be a complicated task. The scale, volume, and complexity of this process require substantial planning and coordination.
Because data fields tend to be stored with different names and types across data sources, it isn’t easy to map each of them to the destination system. Some of the data sources could also be existing systems with significant data gaps. Solving these issues requires collaboration between business and technical stakeholders who profoundly understand the data.
Ensure continuous data integration
Data integration is not a one-time task. The initial effort to import data is significant. Nevertheless, you also need ongoing effort to automatically update the ERP and data warehouse when changes occur.
Despite these challenges, data integration remains an essential part of an organization’s strategy to achieve a unified data view. Having a clear integration strategy and using a data integration tool overcomes these barriers.
Uniform data integration strategy
Consolidating the mix of Cloud and on-premises sources can mean different approaches to integrating their data. However, divergent paths can lead to inconsistent data processing, which in turn can compromise data quality. Creating a uniform strategy that ensures data integrity and synchronization despite systems’ individuality can be difficult.
How to define your integration strategy?
Identify your stakeholders
These can include an executive sponsor, cloud ERP experts, data engineers, customers, and other specialists with a comprehensive view of the organization’s data.
Ask the right questions
What are the budget limits, time, and availability of stakeholders?
Does your data need to be available in real-time, or can it be pulled on-demand or in batches?
What works best for your business: manual consolidation, propagating data to a warehouse using applications, reproducing data to a warehouse using a Middleware, or keeping your data bounded using virtualization?
How will you match the ERP data fields to yours?
Will you be using APIs, direct database access, queuing, or streaming to manage the integration?
There is no single conventional approach to integrating data into an ERP. Some organizations stick to manual integration, while others use application logic, a Middleware, or a hybrid approach.
The final solution an organization adopts depends on many factors: the willingness to create a data warehouse, the availability of resources such as time and money, the size of the datasets, and whether the data needs to be synchronized in real time.
Reduce recurring human intervention
A data integration tool helps simplify the complexity of the integration process by providing an automated mechanism that consolidates data from multiple sources, on-premises and in the Cloud. Such a tool not only enables faster ETL operations but also ensures continuous, real-time updates of the centralized data store. Doing so minimizes human intervention, reduces errors, saves time, and thus increases productivity and data quality.
Additionally, the tool makes it easier to scale as more data sources are added. Rather than having a fragmented approach with a different integration method for each source, the tool offers a consistent solution.
Continuous integration (CI) and continuous delivery (CD) pipelines have become the norm in software engineering. In the corporate world, a CI/CD pipeline is especially useful to ensure your entire development team follows the best quality guidelines and to drastically shorten the deployment cycle to satisfy your customers. This is why all Cloud providers now offer their own CI/CD pipeline platform. In this tutorial, we will focus on how to use the Azure DevOps platform on a Java project.
Create a new Java spring web application on Azure App Service:
- Log in to portal.azure.com
- Navigate to the App Services
- Add a new Web App
- Select your existing subscription level
- Remember its name
Create a new Java DevOps project on Azure DevOps:
- Log in to dev.azure.com with the same Azure account
- Create a “new project” and enter the needed information
- Navigate to the project settings and activate all the services
- Navigate to the “Repos” service and import your Java code or clone an existing Git repository. (For this example, take any sample Spring web app on GitHub: https://github.com/ragsns/hello-world-spring-boot.git)
- Your repository code is now available in the Repos view
Create a build pipeline with Maven on your Azure DevOps Repository:
The objective here is to build your Java application every time there is a commit or a merge on your master branch.
- Navigate to your Repository > Files > Set up build > New Pipeline > Configure > Maven Package
- Azure DevOps will generate its own azure-pipelines.yml; a sketch is shown after this list.
- To test your build pipeline, simply modify a file in your repository master branch.
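The generated file looks roughly like the sketch below; the exact template varies with the Azure DevOps version. The last two tasks are not part of the default template: they publish the packaged application as a build artifact so the release pipeline created in the next step has something to deploy.

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
# Build and test the project with Maven.
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.8'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'package'

# Addition: publish the packaged .war/.jar as a build artifact named "drop".
- task: CopyFiles@2
  inputs:
    Contents: '**/target/*.?(war|jar)'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```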
Create a publish pipeline on your Azure DevOps Pipeline tool:
Now that we have a build pipeline to continuously test and build our codebase we can create a release pipeline to continuously deploy our modification to a development server.
- Navigate to Pipelines > Releases > New Release Pipeline
- Select a Template > Empty Job
- Add a new stage to your empty build pipeline
- On this new release pipeline, navigate to Artifacts > Add > Source > java_pipeline_demo_build (or any other name you gave it)
Now we need to specify which Azure App Service to deploy our Java artifact to:
- On this new release pipeline, navigate to Tasks > Dev > Agent Job > + > Azure App Service Deploy
- Connection Type: Azure Resource Manager
- Azure Subscription: YOUR_SUBSCRIPTION
- App Service type: Web app on Linux
- App Service Name: THE_NAME_YOU_PROVIDED_IN_STEP_1
- Package or folder: Select the .war
- Runtime stack: Java SE (JAVA|8-jre8)
- On this new release pipeline, navigate to Pipeline > Artifact > Continuous deployment trigger > Enable
- Name the pipeline
- Create Release
That’s it. From now on, each commit on your Azure DevOps repository will trigger a build pipeline. The Java artifact it produces will trigger a release pipeline, which deploys that artifact to your Azure App Service server, seamlessly publishing your modifications.
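If you prefer to keep the deployment step in YAML rather than in the classic release editor, the same settings map roughly to the AzureRmWebAppDeployment task as in the sketch below; the subscription and app names are the placeholders used above, and the input names should be double-checked against your task version.

```yaml
# Deployment step equivalent to the Azure App Service Deploy task configured above.
- task: AzureRmWebAppDeployment@4
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'YOUR_SUBSCRIPTION'
    appType: 'webAppLinux'
    WebAppName: 'THE_NAME_YOU_PROVIDED_IN_STEP_1'
    packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.war'
    RuntimeStack: 'JAVA|8-jre8'
```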
WordPress as a CMS is getting more attractive with its new versions, as more and more developers start using WordPress to run niche membership solutions. One of the first things you’ll need to implement is displaying a custom menu depending on your user. For instance, you could promote landing page links to your visitors, account information to your logged-in users, and custom administration reports to your administrators.
Here’s how to write a small PHP filter, in a custom plugin or in the functions.php file, to display a custom menu depending on whether your user is logged in, an administrator, or a visitor:
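A minimal sketch is shown below. It hooks the standard wp_nav_menu_args filter; the menu names (admin-menu, member-menu, visitor-menu) and the primary theme location are assumptions to replace with your own.

```php
<?php
/**
 * Swap the menu assigned to the primary theme location depending on the user.
 * Sketch only: the menu names and the 'primary' location are placeholders.
 */
function my_dynamic_menu_args( $args ) {
    // Only touch the primary navigation location.
    if ( isset( $args['theme_location'] ) && 'primary' === $args['theme_location'] ) {
        if ( current_user_can( 'manage_options' ) ) {
            $args['menu'] = 'admin-menu';   // administrators: custom administration reports
        } elseif ( is_user_logged_in() ) {
            $args['menu'] = 'member-menu';  // logged-in users: account information
        } else {
            $args['menu'] = 'visitor-menu'; // visitors: landing page links
        }
    }
    return $args;
}
add_filter( 'wp_nav_menu_args', 'my_dynamic_menu_args' );
```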
Static websites are a great way to run a simple and inexpensive portfolio or landing page. It’s even free if you leverage the AWS S3 free tier, which allows you to host your static website with up to 5 GB of storage and around 20,000 GET requests per month.
In this tutorial, I will cover how to host your static website on AWS S3 with an AWS Route 53 domain name.
1. Set up and log in to your Amazon AWS account
If you don’t already own an AWS account, head over to aws.amazon.com and register for a free tier account. The AWS S3 free tier is available to new AWS customers for the 12 months following your AWS sign-up date. When your 12-month free usage term ends, or if your usage exceeds the free tier limits, you pay standard service rates. For AWS S3, that’s around $0.0004 per user accessing your site, while owning a domain name on Route 53 costs around $12 per year.
2. Create an AWS S3 bucket to host your site
Once you’re logged into the console and have your static website ready to be deployed, go to the AWS S3 service and create a public bucket:
- Create bucket > Name and region > Bucket Name: yoursitename.com (It needs to be the exact same name as the domain name you’re going to own in Route 53, so make sure it’s available!)
- Create bucket > Name and region> Region: the region where you want to host your site
- Create bucket > configure options: leave as default
- Create bucket > Review: Confirm and create bucket.
3. Upload and enable your static website on your AWS S3 bucket
Now that your bucket is created and accessible to the world, we need to configure it as a website:
- Amazon S3 > YOUR_BUCKET > Overview > Upload: upload your full static website. This website needs to include an index.html and an error.html file. (During the upload, ensure that the file permission is set to “Grant public read access to these objects”.)
- Amazon S3 > YOUR_BUCKET > Properties > Static Website Hosting: Use this bucket to host a website
You can now access your static website with your AWS S3 direct URL: http://YOUR_BUCKET_NAME.s3-website-YOUR_BUCKET_REGION.amazonaws.com/
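If you prefer scripting over the console, the same steps look roughly like this with the AWS CLI; yoursitename.com, the region, and the ./site folder are placeholders.

```bash
# Create the bucket (same name as the future Route 53 domain).
aws s3 mb s3://yoursitename.com --region eu-west-1

# Upload the static site and make the objects publicly readable.
aws s3 sync ./site s3://yoursitename.com --acl public-read

# Enable static website hosting with the index and error documents.
aws s3 website s3://yoursitename.com --index-document index.html --error-document error.html
```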
4. Use an AWS Route 53 custom domain name to change the URL
To tailor our website URL to http://YOUR_SITE_NAME.com, we need to own the YOUR_SITE_NAME domain name. AWS Route 53 is a service that lets you buy a domain name for a yearly fee. Let’s do this in the Route 53 service in your AWS console.
- AWS Route 53 > Register Domain: buy a domain name for your site. (It needs to be the same name as the bucket.)
- Wait for your domain name to be registered and available in your hosted zone
- AWS Route 53 > hosted zone > YOUR_DOMAIN_NAME > Create Record Set > Name : (Leave it empty)
- AWS Route 53 > hosted zone > YOUR_DOMAIN_NAME > Create Record Set > Type: A - IPv4 address
- AWS Route 53 > hosted zone > YOUR_DOMAIN_NAME > Create Record Set > Alias target: select your resource. (If you don’t see it, it means the bucket doesn’t have the same name as the domain and you need to create it anew)
That’s it. You can now access your site with YOUR_DOMAIN_NAME.com
5. WWW and HTTPS
- So your site is also available when typing www, create a second bucket named www.YOUR_SITE_NAME.com that redirects to the YOUR_SITE_NAME.com bucket.
- www.YOUR_SITE_NAME.com bucket > Properties > Static Website Hosting > Redirect: YOUR_SITE_NAME.com (see the CLI sketch after this list)
- Go to Route 53 and add an A alias record named www pointing to this bucket.
- Set up CloudFront and SSL to have access to https
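For reference, the www redirect bucket of the second bullet can also be configured from the AWS CLI; a sketch, with yoursitename.com as a placeholder:

```bash
# Create the www bucket and redirect every request to the apex domain bucket.
aws s3 mb s3://www.yoursitename.com --region eu-west-1
aws s3api put-bucket-website \
  --bucket www.yoursitename.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"yoursitename.com","Protocol":"http"}}'
```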
If you are hosting a WordPress installation on a Linux server, you may encounter issues updating your WordPress version from the WordPress administration panel.
A good way to handle periodic WordPress updates is to run a weekly command line from your Linux cron.
To do so, you’ll first need to install the WordPress CLI toolkit and add it to your path. You’ll then be able to run the core update command. Here is a detailed example of how to do this:
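A sketch of those steps is shown below; the /var/www/html install path and the www-data user are assumptions to adapt to your server.

```bash
# Install WP-CLI (the official phar) and make it available on the PATH.
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

# Run the updates as the web server user (WP-CLI refuses to run as root
# unless you pass --allow-root).
sudo -u www-data wp core update --path=/var/www/html
sudo -u www-data wp core update-db --path=/var/www/html
sudo -u www-data wp plugin update --all --path=/var/www/html
sudo -u www-data wp theme update --all --path=/var/www/html

# Example weekly cron entry (Sundays at 04:00) in the web user's crontab:
# 0 4 * * 0 /usr/local/bin/wp core update --path=/var/www/html >> /var/log/wp-update.log 2>&1
```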