Post on 15-Mar-2018
Oracle WebLogic 12.2 New Multi Tenancy Features
Amersfoort, 15 June 2017
By Michel Schildmeijer
Agenda /Topics
• Update & Roadmap
• Microcontainers / Partitions
• Multitenancy features
• Continuous availability
• JVM resource Isolation
WebLogic 12.2 Update & Roadmap 12cR1 (12.1.3)
• WebLogic Server and Coherence 12c, now with FMW 12c
• Oracle Database 12c Integration
• Java EE 6 with important Java EE 7 APIs
• Java SE 7 and Java SE 8
• Developer focus – Maven, Hudson, Developer Cloud Service
• Available in Cloud
WebLogic 12.2 Update & Roadmap 12cR2 (12.2.1)
• WebLogic Multitenancy
• WebLogic Continuous Availability
• Coherence Persistence and Federated Caching
• Java EE 7
• Automated Elasticity
• REST Management
• Java SE 8
WebLogic 12.2 Update
• Complete CAF (Cloud Application Foundation) Multitenancy: JCS and FMW
• Continuous Availability Enhancements
• Cloud Optimizations
• Compatibility/Usability
• Security
WebLogic Key updates
• Improved high-density deployment features
• Improved availability
• Ready for DevOps
WebLogic Server Continuous Availability: Multi Data Center
Active-Active Continuous Availability
[Diagram: two active sites, each running WebLogic and Coherence behind an OTD instance, fronted by a global load balancer]
WebLogic Server Continuous Availability: Multi Data Center
Automated Data Center Setup/Failover
• Cross-Domain Transaction Recovery – transaction replication for automated recovery
• Multi-Datacenter Federated Caching – cache coherence across sites and clusters; replicate state for multi-datacenter deployments
[Diagram: WLS domains in Site 1 and Site 2, each with managed servers, transaction managers (TM), Active GridLink (AGL) data sources and JDBC TLogs, synchronized via Data Guard; topologies shown: hub & spoke active/passive (DC1 ↔ DRDC1a/DC1b) and active/active (DC1 ↔ DC2)]
Zero Downtime Patching
Continuous Updates with:
□ Automated patch rollout
□ Rollback on error
Reduce Application Downtime
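The rollout/rollback behaviour described above can be sketched as a rolling loop over the servers in a domain. This is an illustrative simulation only, not Oracle's ZDT implementation; the server names and patch callbacks are made up:

```python
# Illustrative sketch of Zero Downtime Patching's rollout model: patch one
# server at a time, and roll back the already-patched servers on any error.

def rolling_patch(servers, apply_patch, rollback_patch):
    """Patch servers one at a time; on failure, roll back the patched ones."""
    patched = []
    for server in servers:
        try:
            apply_patch(server)          # e.g. move the server to a patched Oracle Home
            patched.append(server)
        except Exception:
            for done in reversed(patched):   # rollback on error, newest first
                rollback_patch(done)
            return False                     # rollout aborted; domain unchanged
    return True                              # all servers now on the patched home

# Usage: a fake patch step that fails on "ms3"
log = []

def fake_apply(server):
    if server == "ms3":
        raise RuntimeError("patch failed")
    log.append(("patch", server))

ok = rolling_patch(["ms1", "ms2", "ms3"], fake_apply,
                   lambda s: log.append(("rollback", s)))
# ok is False; ms1 and ms2 were patched, then rolled back in reverse order
```

Because only one server is offline at a time, the cluster keeps serving requests throughout the rollout.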
Recoverable Persistent Caching
Storage of cached data and metadata with automated recovery
Multitenant Live Partition Migration
Migrate running tenant partitions
WebLogic Multitenant Partition Portability
• Move running partitions and resource groups from one cluster to another without impacting application users
• Eliminate unplanned application downtime
• Current scope:
□ Resource group
□ Clusters within a domain
□ Web apps
□ Requires OTD
Live Migration
[Diagram: Partition 1 migrating live from Cluster 1 to Cluster 2, coordinated via heartbeat, fronted by Oracle Traffic Director – a fast, scalable software load balancer]
Oracle Site Guard
End-to-end disaster recovery automation; support for site failover
[Diagram: EM Cloud Control host with the Site Guard plugin and EM DB repository orchestrating a global load balancer across Site 1 (primary) and Site 2 (standby); each site runs Oracle Traffic Director, WebLogic and Coherence, with Data Guard and file-system replication between sites]
Developer focus
• Java EE 7 API support
• Quick Installer
□ Approximately 200 MB
□ Replaces the zip distribution for developers
□ Single-command install via OUI – no GUI
□ Patchable via OPatch
• Deployment performance improvements
□ Mainly due to parallel deployment
• Oracle Enterprise Pack for Eclipse
□ Net Installer included in WLS installations
• NetBeans and JDeveloper/ADF support
Developer focus
• FastSwap deployments
□ Redefine classes at runtime
□ Only in development mode
Automated Elasticity for Dynamic Clusters
• Administration APIs for Dynamic Clusters
• Start/stop a specified number of servers
• Expand/shrink the size of the cluster
• Manage server lifecycle
• Simple/automated scale up/down or tune
• Rules-based decisions based on capacity, demand or schedule
• WLDF Watches and Notifications renamed to Policies and Actions
• Policies: SmartRules, calendar-based policies
• Actions: scaleUp, scaleDown, REST, script
• Use cases: peak loads, adding partitions, batch processing, rebalancing
[Diagram: WLDF SmartRules on the Admin Server monitor load (e.g. request load) and fire a scale-out action, expanding the cluster from Servers 1–3 to Server 4]
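The rules-based scaling decision above can be sketched as a simple policy over recent load samples. This is an illustrative sketch, not the WLDF SmartRule API; the function name and thresholds are made up:

```python
# Illustrative sketch of a rules-based elasticity decision, loosely modelled
# on WLDF policies/actions: only *sustained* load triggers scaling.

def scaling_action(samples, high=0.8, low=0.2, window=3):
    """Return 'scaleUp', 'scaleDown' or None based on recent load samples (0..1)."""
    recent = samples[-window:]
    if len(recent) < window:
        return None                      # not enough history to decide
    if all(s > high for s in recent):    # sustained high load -> grow the cluster
        return "scaleUp"
    if all(s < low for s in recent):     # sustained idle -> shrink the cluster
        return "scaleDown"
    return None

# A brief spike does not trigger scaling; sustained load does.
assert scaling_action([0.3, 0.9, 0.3]) is None
assert scaling_action([0.85, 0.9, 0.95]) == "scaleUp"
```

Requiring the condition to hold over a window mirrors why WLDF policies evaluate over a schedule rather than on a single sample: it avoids flapping on transient spikes.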
WebLogic REST Management API
• REST API for managing WebLogic
• Flexible for DevOps and operational needs
• Covers all of WebLogic management: configure, start/stop, deploy, monitor
• Plain HTTP – no WebLogic client jars required
• Generated from WebLogic MBeans
[Diagram: REST clients calling a REST web app hosted on the Admin Server and on each Managed Server in the WebLogic domain]
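Because the API is plain HTTP, management calls reduce to composing URLs under the REST base path. A minimal sketch, assuming the `/management/weblogic/latest` base path from the 12.2.1 REST documentation; the host, port, server name and credentials below are placeholders:

```python
# Hedged sketch: composing URLs for the WebLogic 12.2.1 RESTful Management API.
# "adminhost:7001" and "ms1" are illustrative placeholders.

BASE = "http://adminhost:7001/management/weblogic/latest"

def monitor_url(server):
    # read-only runtime data for one managed server
    return f"{BASE}/domainRuntime/serverRuntimes/{server}"

def curl(url, user="weblogic"):
    # plain HTTP with basic auth -- no WebLogic client jars needed
    return f"curl -u {user} -H 'Accept: application/json' {url}"

print(curl(monitor_url("ms1")))
```

The same URL scheme covers the edit tree for configuration changes, which is what makes the API attractive for scripted DevOps pipelines.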
Other Improved features
• Deployment Performance Enhancements
□ Application class loading in parallel
□ Indexing of class finder data
□ Deployment caching, helping large deployments process faster
□ Scanning caching for libraries and applications, for faster server restart and deployment time
Other Improved features
• JDBC Data Sources
□ Simplified driver installation/update
□ Proxy data source support
□ Connection leak profiling enhancements
□ JDBC Object Closed Usage – collect profile information about application components that close a connection, statement, or result set
□ Local Transaction Connection Leak – collect profile information about applications that leak local transactions (no commit or rollback)
□ Support for encrypted passwords in a data source definition
Other Improved features
• JTA/JDBC
□ Transaction logs in the database (JDBC TLogs)
□ No transaction TLog writes (no TLog)
□ Logging Last Resource (LLR) transaction optimization
• Messaging for Multitenancy
□ JMS modules, JMS resources, path service, stores
□ Integration solutions, including the Messaging Bridge, JMS pools, and Foreign JMS servers
□ Store-and-Forward (SAF) agents, including JMS SAF
□ AQ JMS using Foreign JMS servers
□ Easier JMS cluster configuration and high availability enhancements
Other Improved features
• JMS Clustering
□ Automatic Service Migration: automatically restart a failed JMS instance on a different WebLogic Server instance
□ Restart-in-Place: automatically restart a failed JMS instance on its current WebLogic Server instance
□ Cluster-targeted SAF agents, bridges, and path services
□ Fail-back: return an instance to its original host server when that server restarts
Oracle WebLogic Multitenancy – Details
Multitenancy – what is it?
• The opposite of a multi-instance model
• One software instance serves multiple tenants
• Tenants share software infrastructure and resources
Multitenancy – Why?
• Break down monoliths into manageable application containers
• Sharing, but also isolation
• Can be cost effective – resources are shared
• Essential in the cloud, where security and isolation are important
Multitenancy – Considerations
• Strong security
• Strong performance
• Strong monitoring
WebLogic Multitenant Use Case
Pluggable Partition as a Deployment Unit
[Diagram: multiple instances (one JVM per application, APP1–APP4) versus a single instance (one JVM hosting partitioned applications PARTAPP1–PARTAPP8)]
WebLogic Multitenant does not yet support:
• Oracle Web Services Manager
• SOA Suite
• Application Development Framework (ADF)
• WebCenter
• Oracle Service Bus
• Oracle Enterprise Scheduler
• WebLogic SCA
But coming soon....
Base Technical Concepts MT
[Diagram: Oracle Traffic Director routing into a WebLogic Server instance hosting Partition 1 and Partition 2; each partition has its own virtual target, apps, JMS, data source and JNDI tree; Coherence provides per-partition caches and services; each partition maps to its own database partition]
Components: Virtual Target, Partition, Resource Group
• Virtual Target
– Locates and accesses the partition (virtual hostnames, URIs, ports)
– Targeted to a server/cluster
• Partition
– Runtime/isolation characteristics (Java isolation/RCM, Work Managers, security)
– Assigned to a virtual target
• Resource Group
– Applications and resources managed together
– Created within a partition
– Targeted to a virtual target
– Groups Java EE applications and resources in a distinct unit
Partition Targeting – Virtual Target
• Contains addresses, protocol settings, and targeting: how to reach the partition and where it runs
□ Host name and port: url.nl:8080
□ Optional URI: /myapp
□ Network Access Point/Channel
□ Protocol-specific configuration
▪ Web server configuration
▪ T3, IIOP configuration
□ Target clusters and managed servers
• Request routing is determined by host name and optional URI
[Diagram: a virtual target (host:port, URI, channel, web server, T3, SSL, targets) fronting Partition 1 with its apps, JMS, data source and JNDI tree]
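The host-plus-URI routing rule above can be sketched as a small matcher. This is an illustrative sketch of the routing concept, not WebLogic's implementation; the hosts, ports and partition names are made up:

```python
# Minimal sketch of virtual-target routing: requests are matched on host name,
# port and optional URI prefix to find the owning partition.

virtual_targets = [
    {"host": "url.nl", "port": 8080, "uri": "/myapp", "partition": "Partition1"},
    {"host": "url.nl", "port": 8080, "uri": None,     "partition": "Partition2"},
]

def route(host, port, path):
    """Return the partition whose virtual target matches host:port and URI prefix."""
    for vt in virtual_targets:
        if vt["host"] == host and vt["port"] == port:
            if vt["uri"] is None or path.startswith(vt["uri"]):
                return vt["partition"]
    return None   # no virtual target claims this request

assert route("url.nl", 8080, "/myapp/index") == "Partition1"
assert route("url.nl", 8080, "/other") == "Partition2"
```

More specific targets (with a URI) are checked before the catch-all host-only target, which is why the ordering of the list matters in this sketch.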
Simple Case: Domain Partitions
[Diagram: OTD routes VT1/DP1 and VT2/DP2 traffic into App_cluster (svr1–svr3). Virtual Target VT1 (host <url1>-mt.com, target App_cluster) serves Domain Partition DP1; Virtual Target VT2 (host <url2>-mt.com, target App_cluster) serves Domain Partition DP2. Each partition's resource group is based on the appRGT resource group template.]
Common Topology of a WebLogic Domain
[Diagram: a domain with CLUSTER 1–3 as targets for apps, JDBC and JMS (App1/JMS1/DS1, App2/JMS2/DS2, App3/JMS3/DS3); one security realm, administration and logs at domain level]
Topology of a WebLogic Domain Partition
[Diagram: a domain partition containing RESOURCE GROUP 1–3, each grouping apps, JDBC and JMS (App1/JMS1/DS1, App2/JMS2/DS2, App3/JMS3/DS3); a security realm per partition, with its own administration and logs. Is targeting missing? – see the advanced targeting below.]
Advanced Targeting for Virtual Targets, Partitions, and Resource Groups
[Diagram: a domain with CLUSTER 1–3 as available targets.
• DOMAIN PARTITION 1 / RESOURCE GROUP 1 → Virtual Target 1 (www.url.nl/app1, target: CLUSTER 1)
• DOMAIN PARTITION 2 / RESOURCE GROUP 1 → Virtual Target 2 (www.url.nl/app2, target: CLUSTER 2)
• DOMAIN PARTITION 4: RESOURCE GROUP 1 → Virtual Target 4 (www.url.nl/app3, target: CLUSTER 2) and RESOURCE GROUP 2 → Virtual Target 5 (www.justid.nl/app4, target: CLUSTER 3)]
Fusion Middleware Control
• Updated to enable configuration and management of domain partitions
□ Partition monitoring and troubleshooting
□ Interacts with Lifecycle Manager to propagate changes to Traffic Director, etc.
THE console for Multitenancy Management
Fusion Middleware Control
• Updated to enable configuration of WebLogic managed servers and clusters
• Updated to enable configuration and management of Coherence managed servers
• REST and WLST are also available
THE console for Multitenancy Management
Resource Group Templates
[Diagram: an App resource group template (application deployments, JMS, mail session, JDBC resource) instantiated by resource groups in Partition 1 (App1) and Partition 2 (App2); each partition has its own virtual target (host:port, URI, targets) and per-partition overrides, and maps to its own PDB (APP1 PDB, APP2 PDB) in a container database]
Isolation for Partitions
• Security/identity isolation: realm and users per partition
• JVM runtime isolation: more control between the JDK and WebLogic – heap, CPU, threads, requests…
• Administrative isolation: admin roles, lifecycle, troubleshooting
• Traffic/data isolation: dedicated JNDI, segregated data; dedicated and shared Coherence caches
Security Isolation for Domain Partitions
• Per-partition configuration
□ Per-partition security realm
• Default roles scoped to partitions
□ Admin, operator, deployer, monitor
JVM Isolation – Resource Consumption Managers
Runtime Isolation Within a JVM
• More integration between WebLogic and the JDK
• Prevents resource hogging; protects applications in a shared JVM
• Managed resources
□ Retained heap, CPU time, open file descriptors
• Resource Consumption Management requires:
□ Oracle JDK 8u40+
□ The G1 garbage collector
□ JVM arguments to enable it:
-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC
JVM Isolation – Resource Consumption Managers
• Open file descriptors (file-open): tracks the number of open files, including files opened through
□ FileInputStream, FileOutputStream, RandomAccessFile
□ Native I/O file channels
• Heap – retained bytes (heap-retained): tracks the amount of heap retained in use by a domain partition
• CPU utilization (cpu-utilization): percentage of CPU time used by a partition
Policy Model – Triggers, Fair Share, and Resource Actions
• Trigger: a usage threshold for a resource
□ When consumption of the resource crosses the specified threshold, the specified action is performed
□ Best for environments where usage by domain partitions is predictable
• Fair Share: similar to the "Fair Share Request Class" in Work Managers
JVM Isolation - Resource Consumption Managers
• JVM actions
□ Notify – inform the administrator that a threshold has been crossed
□ Slow – reduce the partition's ability to consume resources
□ Fail – reject requests for the resource (file descriptors only)
□ Stop – initiate the shutdown sequence for the offending partition
• "Boundaries" (triggers) and Fair Share usage patterns
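The trigger-to-action mapping can be sketched as picking the action of the highest boundary crossed. This is an illustrative sketch of the policy model, not the RCM implementation; the thresholds mirror the heap example on the slides that follow:

```python
# Illustrative sketch of the RCM trigger policy model: when a partition's
# usage of a managed resource crosses a boundary, the matching action fires.

TRIGGERS = [  # (threshold in MB, action), evaluated highest boundary first
    (2000, "stop"),
    (1500, "slow"),
    (1250, "notify"),
]

def action_for(usage_mb):
    """Return the action for the highest boundary crossed, or None."""
    for threshold, action in TRIGGERS:
        if usage_mb >= threshold:
            return action
    return None

assert action_for(1000) is None       # below all boundaries
assert action_for(1260) == "notify"   # crossed 1.25 GB
assert action_for(1510) == "slow"     # crossed 1.5 GB
assert action_for(2050) == "stop"     # crossed 2 GB
```

Evaluating the highest boundary first ensures that a partition far over budget goes straight to "stop" rather than merely "notify".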
Creating a Resource Manager
Add it to the partition
Example using Limits
Heap Example
[Diagram: a JVM shared by Partitions 1–4, with declared heap boundaries at 1.25, 1.5 and 2.0 GB]
<name>heap-level-1</name>
<heap>
  <trigger>
    <name>1.25GB</name><value>1250</value><action>notify</action>
  </trigger>
  <trigger>
    <name>1.5GB</name><value>1500</value><action>slow</action>
  </trigger>
  <trigger>
    <name>2GB</name><value>2000</value><action>stop</action>
  </trigger>
</heap>
Declared Boundaries – Heap Example
[Animation over the same trigger configuration, a JVM shared by Partitions 1–4:
• Partitions at 1 / 0.5 / 0.75 / 0.5 GB – below all boundaries, no action
• Partition 1 reaches 1.26 GB – crossing 1.25 GB fires the "notify" action
• Partition 1 reaches 1.51 GB – crossing 1.5 GB fires the "slow" action
• Partition 1 reaches 2.05 GB – crossing 2 GB fires the "stop" action]
Fair Share – Heap Example
<name>heap-level-1</name>
<heap>
  <fair-share-constraint>
    <name>FS25</name><value>25</value>
  </fair-share-constraint>
  <trigger>
    <name>5GB</name><value>5000</value><action>notify</action>
  </trigger>
  <trigger>
    <name>6.5GB</name><value>6500</value><action>stop</action>
  </trigger>
</heap>
[Diagram: a 9 GB JVM shared by Partitions 1–4, each holding 1.75 GB]
Steady state: every partition has a fair share of the heap and can get an equal share if necessary.
Fair Share – Heap Example
[Animation over the same configuration (9 GB JVM):
• Partition 1 at 4.0 GB, others at 0.5 / 0.25 / 0.25 GB – uncontended, Partition 1 can use as much as it needs
• When the other partitions begin to compete for heap (1.33 GB each), WebLogic slows requests in Partition 1
• Eventually retained heap is divided evenly among all partitions (2.0 GB each)
• If Partition 1 runs away with heap (6.51 GB), it hits the 6.5 GB "stop" trigger]
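The fair-share behaviour shown in the animation can be sketched as a simple allocation rule: demands are honoured while heap is uncontended, and capped proportionally to each partition's share under contention. This is an illustrative sketch of the idea only, not the RCM algorithm (which converges gradually via GC statistics); the share values mirror the FS25 constraint above:

```python
# Illustrative sketch of fair-share heap behaviour in a shared JVM.

def fair_share_allocation(total_gb, shares, demands):
    """Grant demands while uncontended; under contention, cap each partition
    at its proportional fair share of the total heap."""
    if sum(demands.values()) <= total_gb:
        return dict(demands)          # uncontended: everyone gets what they ask for
    total_shares = sum(shares.values())
    return {p: total_gb * shares[p] / total_shares for p in shares}

shares = {"P1": 25, "P2": 25, "P3": 25, "P4": 25}

# Uncontended: P1 may hold 4 GB of a 9 GB JVM
calm = fair_share_allocation(9, shares, {"P1": 4.0, "P2": 0.5, "P3": 0.25, "P4": 0.25})
assert calm["P1"] == 4.0

# Contended: each of four equal-share partitions converges to 9 * 25/100 = 2.25 GB
busy = fair_share_allocation(9, shares, {"P1": 4.0, "P2": 3.0, "P3": 3.0, "P4": 3.0})
assert busy["P1"] == 2.25
```

Unequal share values would skew the contended split accordingly, which is how one tenant can be guaranteed a larger slice of a shared JVM.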
Partition and Resource Group Lifecycle
• Start/stop
□ A partition
□ A partition on a single managed server
□ A resource group
□ A resource group on a single managed server
• Enables refresh for non-dynamic configuration changes
• Independence between "tenants"
[Diagram: partitions P1–P4; stop an entire partition, or stop a single instance of a partition]
Microcontainers
• Monoliths broken into small services
• Portability between environments
• Parity between dev and production
• Fast startup/shutdown
• Easy scale-up
• Enable migration to the cloud
Microcontainer Portability
• Enables container-like packaging
• Move applications between environments
□ Dev to test, test to production, etc.
• Export
□ Service packaging – captures partition configuration, application binaries, etc.
• Import
□ Adapt to the host environment: virtual target, security realm, resource management, Coherence integration
□ Service startup – no need to start a dedicated JVM
Export/Import of Partitions
[Diagram: pluggable partitions SVC1–SVC4 with their resources spread across Domains 1–3 behind Oracle Traffic Director; SVC3 is exported from one domain and imported into another]
Microcontainer Portability
• OTD load balancer integration
• Automatic service enablement with Oracle Traffic Director
• Consistent endpoint, independent scalability
[Diagram: the same export/import topology, with Oracle Traffic Director providing a consistent endpoint for the migrated SVC3 partition]
Application Portability
Domain to Partition Conversion Tool (D-PCT)
• Converts an existing domain's applications into a domain partition
□ Convert applications to services; migrate to WebLogic MT
• Migrate
□ Captures relevant domain configuration, such as applications and system resources
• Import
□ Import separate applications into separate partitions; create resource group templates
□ Adjust to the host environment: virtual target, security realm, resource management
□ Load balancer integration (OTD)
• D-PCT is downloaded separately
• D-PCT supports 10.3.6, 12.1.1, 12.1.2 and 12.1.3 source domains
• A patch is needed on 12.2.1 to import the domain partition produced by this conversion
Microcontainer Portability – Live Migration
• Move running services (partitions and resource groups) from one cluster to another
• Eliminate service downtime for planned events
• Current possibilities:
□ Resource group
□ Clusters/servers within a domain
□ Web apps
• Future:
□ All application types and protocols, including T3, RMI, JMS, etc.
□ Migration across domains
[Diagram: Traffic Director redirecting to Partition 1 as it migrates from Cluster 1 to Cluster 2]
WebLogic Multitenant – Some Comments
• A special GC option is needed – the G1 garbage collector plus Java 8 are required
• Monitoring can detect port (partition) failures, but dedicated monitoring tools are not yet available
• CPU/IO/memory consumption is the top concern; heap is the main target
• Partition restart does not always work
• Memory accounting for triggers is loose – it relies on GC statistics
• Generally works OK; sometimes a partition failed while the JVM kept running
• Summary: the JVM remained stable after tens of partition restarts
Any Questions? Let's keep in touch!!
The ACE Community
mschildmeijer@qualogy.com
https://community.oracle.com/blogs/mnemonic
https://www.qualogy.com/nl/techblog/author/michel-schildmeijer
@MNEMONIC01@Qualogy_news
nl.linkedin.com/in/mschldmr