Feed aggregator

[Video 2 of 5] 3 Ways to Connect to Oracle Cloud

Online Apps DBA - 6 hours 40 min ago

There are 3 ways to connect to the Oracle Cloud! Leave a comment below and share how many you know. Note: We’ve covered these 3 ways in our 2nd video, part of Networking In Oracle Cloud, here: https://k21academy.com/1z093214 […]

The post [Video 2 of 5] 3 Ways to Connect to Oracle Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Batch Architecture - Designing Your Cluster - Part 1

Anthony Shorten - Sun, 2019-02-17 18:42

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes, the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command-line utility, named bedit.sh, that provides a wizard-style capability to build and maintain your configuration. By default the capability is disabled; it can be enabled by setting Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a
*************************************************
* Environment Configuration demo                *
*************************************************

50. Advanced Environment Miscellaneous Configuration
...
       Enable Batch Edit Functionality:                    true
...

Once enabled, the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. It is easy to use, with the following recommendations:

  • Flexible Options. When invoking the command you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options vary accordingly; use the -h option for a full list.
  • Built-In Help. If you do not know what a parameter, or even an object type, is about, you can use the help <topic> command. For example, when configuring threadpools, help threadpoolworker gives you advice about the approaches you can take. For a list of topics, type help with no topic.
  • Simple Commands. The utility has a simple set of commands to interact with the settings. For example, to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
  • Save the Configuration. The save command makes all changes in the session reflect in the relevant file; conversely, if you make a mistake, you can exit without saving the session.
  • Informative. The utility tells you which file you are editing at the start of the session, so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w

Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw

Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster. This is the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement different architectures, from simple to complex. Using Batch Edit, three cluster types are supported (you must choose one type per environment).

Cluster Type (template code), Use Cases and Comments:

  • Single Server (ss): the cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing, as it is the simplest to implement.
  • Uni-Cast (wka): the cluster uses the unicast protocol, with the hosts that are part of the cluster explicitly named. This is recommended for sites that want to lock down a cluster to specific hosts and do not want to use multi-cast protocols. Administrators will have to name the list of hosts, known as Well Known Addresses, as part of this configuration.
  • Multi-Cast (mc): the cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites that want a dynamic configuration where threadpools and submitters are accepted on demand. This is the lowest amount of configuration for production clusters, as threadpools can dynamically join a cluster from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration, with the cluster restricted to a single host; network communication is restricted to that host within the configuration. To use this cluster type, simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly, as host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where a site does not want to use the multi-cast protocol or wants to micro-manage its configuration. To use this cluster type, simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values use the command:

set socket.<socketnumber> <parameter> <value>

where:

  • <socketnumber>: the host number to edit
  • <parameter>: either wkaaddress (the host or IP address of the server) or wkaport (the port number on that host to use)
  • <value>: the value for the parameter

For example: set socket.1 wkaaddress host1

To use this cluster style, ensure the following (a sketch of a complete session follows the list):

  • Use the same port number per host. Try to use the same broadcast port on each host in the cluster. If the ports differ, the port number in the main configuration file has to be changed for the machines concerned to define that port.
  • Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).
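
Here is a minimal sketch of such a session (the host names and port are hypothetical):

$ bedit.sh -c -t wka
> add socket
> set socket.1 wkaaddress host1
> set socket.1 wkaport 42020
> add socket
> set socket.2 wkaaddress host2
> set socket.2 wkaport 42020
> save
> exit
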
Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast IP address and port are broadcast across your network per cluster. It requires very little configuration, and threadpools can connect to the cluster dynamically. It uses the multi-cast protocol, which network administrators either love or hate. The configuration is similar to Single Server, but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administrator Guide for additional configuration advice.
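
For illustration, the relevant entries might look like the lines below (a sketch; the multi-cast address and port are hypothetical, pick values valid for your network):

$ grep COHERENCE_CLUSTER $SPLEBASE/etc/ENVIRON.INI
COHERENCE_CLUSTER_ADDRESS=231.1.2.10
COHERENCE_CLUSTER_PORT=42050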

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

  • Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster to save configuration effort.
  • Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment, to prevent unintentional cross-environment clustering.
  • Set a Cluster Type and Stick with It. It is possible to migrate from one cluster type to another (without changing other objects), but to save time it is better to lock in one type and stick with it for the environment.
  • Avoid Using Prod Mode. There is a mode in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross-cluster issues. Prod mode is recommended for production systems only. Note: there are further safeguards built into the Oracle Utilities Application Framework to prevent cross-cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster.
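
A quick way to locate and sanity-check the generated file (a sketch; the exact path depends on your environment and the push destinations shown by bedit):

$ find $SPLEBASE -name tangosol-coherence-override.xml
$ grep -i cluster /u01/ugtbk/splapp/standalone/config/tangosol-coherence-override.xml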

Next Steps

Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).

Microsoft Azure: account upgrade to keep going with free products

Dietrich Schroff - Sun, 2019-02-17 12:39
Today I got an e-mail about my Azure account:


Microsoft provides a little FAQ for this upgrade process:
So there is no option not to upgrade; "you won't be able to access any Azure services" is not really an option...

There are 4 upgrade paths I can choose from:
To use this account only for my blog, I decided to choose "no technical support", which is perfectly adequate.

After hitting the upgrade button, the notifications bar came up with the following message:
And the subscriptions tab inside the Azure portal shows:



GoldenGate XAG APP VIP Revisited

Michael Dinh - Sat, 2019-02-16 08:17

For unknown reasons, the XAG integration for the GoldenGate target was eradicated without leaving any trace (I was not able to find any).

When running crsctl at the target, no resources were available.

crsctl stat res -t -w 'TYPE = xag.goldengate.type'
crsctl stat res -t|egrep -A2 'dbfs|xag'

Here is an example from source:

$ crsctl stat res -t -w 'TYPE = xag.goldengate.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02           STABLE
--------------------------------------------------------------------------------

$ crsctl stat res -t|egrep -A2 'dbfs|xag'
dbfs_mount
               ONLINE  ONLINE       host_source01            STABLE
               ONLINE  ONLINE       host_source02            STABLE
--
ora.dbfs.db
      1        ONLINE  ONLINE       host_source01            Open,STABLE
      2        ONLINE  ONLINE       host_source02            Open,STABLE
--
xag.gg_ue-vip.vip
      1        ONLINE  ONLINE       host_source01            STABLE
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02            STABLE

Now, I need to set up XAG for the target RAC cluster.

FYI: the XAG Bundled Agent was not downloaded; instead, I used the one available from GRID_HOME.

$ agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 3.1.0

$ agctl query deployment
The Oracle Grid Infrastructure Agents deployment is bundled

Creating the XAG resource using 2 commands provided different metadata than using 1 command.

The difference between FILESYSTEMS is expected due to change from DBFS to ACFS.

Currently, the change is being implemented at the target.

Here is an example using 2 commands:

As Root:
appvipcfg create -network=1 -ip=10.30.91.158 -vipname=xag.gg_ue-vip.vip -user=ggsuser -group=oinstall

As Oracle:
agctl add goldengate gg_ue \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target01,target02 \
--vip_name xag.gg_target-vip.vip \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2

Create source_xag_goldengate.txt and target_xag_goldengate.txt using:
crsctl stat res -w "TYPE = xag.goldengate.type" -p
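
For example (a sketch; run on the source and the target cluster respectively, redirecting the output):

$ crsctl stat res -w "TYPE = xag.goldengate.type" -p > source_xag_goldengate.txt
$ crsctl stat res -w "TYPE = xag.goldengate.type" -p > target_xag_goldengate.txt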
$ diff source_xag_goldengate.txt target_xag_goldengate.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
---
< AUTO_START=restore
> AUTO_START=never
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=
---
< VIP_CREATED=1
> VIP_CREATED=0

Here is an example using 1 command:

As Root:
agctl add goldengate gg_target \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target01,target02 \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--network 1 --ip 10.30.91.158 \
--user ggsuser \
--group dba

$ diff source_xag_goldengate.txt target_xag_goldengate2.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=hard(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs) pullup(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs)
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:ora.acfs_data.acfs_vol.acfs)

In conclusion, I will be creating XAG resources using 1 command from now on, as it provides more metadata info.

PostgreSQL : barman rsync method vs streaming method

Yann Neuhaus - Sat, 2019-02-16 06:16

Barman is a tool to perform backup and recovery for PostgreSQL databases. It can do backups using two methods:
- rsync/ssh
- streaming
In this blog I am going to explain how to use these two methods to back up a PostgreSQL database. Generally it is good practice to dedicate a server to barman instead of installing it on the database server. My environment is described below:
PostgreSQL server: dbi-pg-essentials 192.168.22.101 CentOS 7
barman server: pgservertools 192.168.22.104 Oracle Linux 7
PostgreSQL version: 11.1
barman version: 2.6
The first step is to install barman on the barman server pgservertools:

[root@pgservertools network-scripts]# yum install  barman.noarch barman-cli.noarch
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package barman.noarch 0:2.6-1.rhel7 will be installed
---> Package barman-cli.noarch 0:1.3-1.rhel7.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch       Version             Repository                  Size
================================================================================
Installing:
 barman         noarch     2.6-1.rhel7         pgdg10-updates-testing     300 k
 barman-cli     noarch     1.3-1.rhel7.1       pgdg10-updates-testing      14 k

Transaction Summary
================================================================================
...
...
Installed:
  barman.noarch 0:2.6-1.rhel7         barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@pgservertools network-scripts]#

The installation creates a Linux user named barman.
As the rsync method needs passwordless connections between the two servers for the barman user, we have to configure SSH keys.
On the server pgservertools (the barman server), let’s create keys with the user barman and then copy the public key to the database server dbi-pg-essentials for the user postgres:

-bash-4.2$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/barman/.ssh/id_rsa):
Created directory '/var/lib/barman/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/barman/.ssh/id_rsa.
Your public key has been saved in /var/lib/barman/.ssh/id_rsa.pub.
The key fingerprint is:
f4:b7:6b:6e:38:25:ae:be:7f:9a:34:03:a1:1c:a0:ac barman@pgservertools
The key's randomart image is:
+--[ RSA 2048]----+
|    .            |
| . . .           |
|  o   . o        |
| .   . + o       |
|E     o S . .    |
|         .....   |
|         .++.    |
|         .+++.   |
|       .+++Bo    |
+-----------------+
-bash-4.2$ 


-bash-4.2$ cd .ssh/
-bash-4.2$ ls
id_rsa  id_rsa.pub


-bash-4.2$ ssh-copy-id postgres@dbi-pg-essentials
The authenticity of host 'dbi-pg-essentials (192.168.22.101)' can't be established.
ECDSA key fingerprint is 33:65:38:f4:eb:5b:f4:10:d3:36:7b:ea:5a:70:33:18.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@dbi-pg-essentials's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'postgres@dbi-pg-essentials'"
and check to make sure that only the key(s) you wanted were added.

If everything is fine, barman should be able to connect to the database server as the postgres Linux user without a password:

-bash-4.2$ hostname
pgservertools.localdomain
-bash-4.2$ id
uid=994(barman) gid=992(barman) groups=992(barman) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$ ssh postgres@dbi-pg-essentials hostname
dbi-pg-essentials
-bash-4.2$

On the database server I also installed the package barman-cli.noarch, which will allow us to use the command barman-wal-archive. We will talk about this later.

[root@dbi-pg-essentials ~]# yum install barman-cli.noarch
Loaded plugins: fastestmirror
dbipgessentials                                          | 3.6 kB     00:00
edb-repos                                                | 2.4 kB     00:00
…
…
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : barman-cli-1.3-1.rhel7.1.noarch                              1/1
  Verifying  : barman-cli-1.3-1.rhel7.1.noarch                              1/1

Installed:
  barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@dbi-pg-essentials ~]# 
rsync backup

As specified earlier, in this case the backup is done using rsync. But there are many ways to ship WAL files to the barman server, so before talking about the barman configuration let’s take a moment to look at WAL shipping.
1- With WAL archiving
To better understand, here is a picture I took from the barman documentation. As we can see, the backup is done via rsync and the WAL files are sent via barman-wal-archive. This utility comes with barman 2.6.
Before barman 2.6, the rsync command was used to send WAL files to barman.
In the documentation we can find that using barman-wal-archive instead of rsync/SSH reduces the risk of data corruption of the shipped WAL file on the barman server.
(figure: rsync_1)
The use of barman-wal-archive is configured via the postgresql.conf file on the database server: it appears as the value of the parameter archive_command. Below are the values in my postgresql.conf file.

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

Before barman 2.6, we would use the following archive_command to send WAL files to the barman server:

archive_command = 'rsync -a  %p  barman@pgservertools:/var/lib/barman/pgserver11/incoming/%f' # command to use to archive a logfile segment
[postgres@dbi-pg-essentials PG1]$

2- With WAL archiving and WAL streaming
This picture from the barman documentation will help to better understand:
(figure: rsync_2)
To use WAL streaming to the barman server, we need pg_receivewal (pg_receivexlog up to PostgreSQL 10) to be installed on the barman server. Be careful about the version of pg_receivewal: in my case I installed version 11.1, as my PostgreSQL is 11.1.

[postgres@pgservertools bin]$ /usr/pgsql-11/bin/pg_receivewal -V
pg_receivewal (PostgreSQL) 11.1
[postgres@pgservertools bin]$

A streaming connection should also be configured, and the parameter streaming_archiver should be set to on.
Now, to sum up, let’s say that I want to configure barman with:
- the rsync method
- barman-wal-archive and WAL streaming
The barman configuration file /etc/barman.conf should then look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = rsync
streaming_archiver = on
archiver = on
path_prefix=/usr/pgsql-11/bin/
[postgres@pgservertools bin]$

And the postgresql.conf should contain the following entries:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

The first thing is to test that the barman configuration is fine for the PostgreSQL database: the check command should not return any errors. On the barman server, with the user barman:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 0 backups, expected at least 0)
        ssh: OK (PostgreSQL server)
        not in recovery: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$
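
At this point WAL shipping can also be verified end to end by forcing a WAL switch (a sketch; the switch-wal subcommand exists from barman 2.4 on, so check your version):

-bash-4.2$ barman switch-wal --force --archive pgserver11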

Now we can launch a backup using the backup command, on the barman server with the user barman:

-bash-4.2$ barman backup pgserver11
Starting backup using rsync-exclusive method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T153350
Backup start at LSN: 0/2E000060 (00000005000000000000002E, 00000060)
This is the first backup for server pgserver11
Starting backup copy via rsync/SSH for 20190215T153350
Copy done (time: 12 seconds)
This is the first backup for server pgserver11
WAL segments preceding the current backup have been found:
        00000005000000000000002D from server pgserver11 has been removed
Asking PostgreSQL server to finalize the backup.
Backup size: 74.1 MiB
Backup end at LSN: 0/2E000168 (00000005000000000000002E, 00000168)
Backup completed (start time: 2019-02-15 15:33:52.392144, elapsed time: 15 seconds)
Processing xlog segments from file archival for pgserver11
        00000005000000000000002E
        00000005000000000000002E.00000060.backup
-bash-4.2$

We can list the existing backups, on the barman server with the barman user:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 0 B
-bash-4.2$
Streaming backup

Since version 2.0, barman supports streaming replication for backups. This method uses the native pg_basebackup.
1- Streaming-only backup
This picture from the barman documentation may help:
(figure: stream_1)
As we can see, in this case backups are done via streaming. WAL files are also shipped using the streaming protocol.
2- WAL archiving and WAL streaming
Once again, the following picture may help:
(figure: stream_2)
In this case we configure standard archiving as well, to implement a more robust architecture.

For example, to implement a barman configuration with streaming backups, WAL streaming and WAL archiving, /etc/barman.conf should look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = postgres
streaming_archiver = on
archiver = on
slot_name=barman
path_prefix=/usr/pgsql-11/bin/

and the postgresql.conf:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment
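
Note that slot_name=barman requires the replication slot to exist on the PostgreSQL server; recent barman versions can create it for you (a sketch; verify that the --create-slot option is available in your barman version):

-bash-4.2$ barman receive-wal --create-slot pgserver11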

So the check should not return any errors:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        replication slot: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 1 backups, expected at least 0)
        pg_basebackup: OK
        pg_basebackup compatible: OK
        pg_basebackup supports tablespaces mapping: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$

And when we launch a backup, we can see that barman is using pg_basebackup:

-bash-4.2$ barman backup pgserver11
Starting backup using postgres method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T160757
Backup start at LSN: 0/2F0001A8 (00000005000000000000002F, 000001A8)
Starting backup copy via pg_basebackup for 20190215T160757
Copy done (time: 11 seconds)
Finalising the backup.
Backup size: 73.9 MiB
Backup end at LSN: 0/31000060 (000000050000000000000031, 00000060)
Backup completed (start time: 2019-02-15 16:07:57.919595, elapsed time: 11 seconds)
Processing xlog segments from file archival for pgserver11
        000000050000000000000030
        000000050000000000000031
Processing xlog segments from streaming for pgserver11
        000000050000000000000030
-bash-4.2$

The available backups are now:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T160757 - Fri Feb 15 16:08:09 2019 - Size: 73.9 MiB - WAL Size: 0 B
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 48.3 KiB

To restore with barman, we use the recover command. For example, the following command will restore the backup 20190215T160757 on the server dbi-pg-essentials_3, in the directory /u02/pgdata/PGRESTORE:

-bash-4.2$ barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" pgserver11 20190215T160757 /u02/pgdata/PGRESTORE
Conclusion

In this blog I have tried to explain different scenarios for using barman. We talked about the rsync and streaming methods for backups. Before starting the setup, a choice must be made between them. One can check the documentation for more information.

Ref: http://docs.pgbarman.org/

The article PostgreSQL : barman rsync method vs streaming method appeared first on Blog dbi services.

[Blog] [1Z0-932] Oracle Cloud Infrastructure (OCI): Week 1 Learning Path Cloud Concepts & IAM Concepts

Online Apps DBA - Sat, 2019-02-16 04:17

In technical terms, Cloud is defined as computing services including Storage, Server, Application, Networking, Database etc… So if you are a beginner and want to start your journey in Oracle Cloud Infrastructure, then visit: https://k21academy.com/oci28 and consider our new blog covering: ✔ What is Cloud & Features of Cloud Computing? ✔ Cloud Service & Deployment […]

The post [Blog] [1Z0-932] Oracle Cloud Infrastructure (OCI): Week 1 Learning Path Cloud Concepts & IAM Concepts appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[BLOG] GoldenGate: Upgrade Classic Capture to Integrated Capture

Online Apps DBA - Sat, 2019-02-16 02:29

Are you facing trouble in upgrading classic capture mode to integrated capture in Oracle GoldenGate? If yes, then visit: https://k21academy.com/goldengate21 to know: ✔The difference between the capture modes in Oracle GoldenGate ✔How to Upgrade Classic to Integrated Capture and much more… […]

The post [BLOG] GoldenGate: Upgrade Classic Capture to Integrated Capture appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Another bug with lateral

XTended Oracle SQL - Fri, 2019-02-15 18:32

Compare the results of the following query with the clause “fetch first 2 rows only”

with 
 t1(a) as (select * from table(odcinumberlist(1,3)))
,t2(a,b) as (select * from table(ku$_objnumpairlist(
                                 sys.ku$_objnumpair(1,1),
                                 sys.ku$_objnumpair(1,2),
                                 sys.ku$_objnumpair(1,3),
                                 sys.ku$_objnumpair(3,1),
                                 sys.ku$_objnumpair(3,2),
                                 sys.ku$_objnumpair(3,3)
                                 )))
,t(id) as (select * from table(odcinumberlist(1,2,3,4,5,6,7)))
select
  *
from t,
     lateral(select t1.a,t2.b
             from t1,t2 
             where t1.a = t2.a 
               and t1.a = t.id
             order by t2.b
             fetch first 2 rows only
             )(+)
order by id;

        ID          A          B
---------- ---------- ----------
         1          1          1
         1          3          1
         2          1          1
         2          3          1
         3          1          1
         3          3          1
         4          1          1
         4          3          1
         5          1          1
         5          3          1
         6          1          1
         6          3          1
         7          1          1
         7          3          1

14 rows selected.

with this one (I’ve just commented out the “fetch first 2 rows only” line):

with 
 t1(a) as (select * from table(odcinumberlist(1,3)))
,t2(a,b) as (select * from table(ku$_objnumpairlist(
                                 sys.ku$_objnumpair(1,1),
                                 sys.ku$_objnumpair(1,2),
                                 sys.ku$_objnumpair(1,3),
                                 sys.ku$_objnumpair(3,1),
                                 sys.ku$_objnumpair(3,2),
                                 sys.ku$_objnumpair(3,3)
                                 )))
,t(id) as (select * from table(odcinumberlist(1,2,3,4,5,6,7)))
select
  *
from t,
     lateral(select t1.a,t2.b
             from t1,t2 
             where t1.a = t2.a 
               and t1.a = t.id
             order by t2.b
--             fetch first 2 rows only
             )(+)
order by id;

        ID          A          B
---------- ---------- ----------
         1          1          2
         1          1          3
         1          1          1
         2
         3          3          2
         3          3          1
         3          3          3
         4
         5
         6
         7

11 rows selected.

Obviously, the first query should return fewer rows than the second one, but we can see that it returned more rows and that the join predicate “and t1.a = t.id” was ignored: A and B are not empty even though “A” is not equal to t.ID.

Categories: Development

Error (CLSD|CLSU-00100|CLSU-00103: error location: sclsdgcwd2|CLSD00183) Running ggsci

Michael Dinh - Fri, 2019-02-15 18:25

Rant: Any application requiring strace for a simple problem to determine root cause is poorly written.

Oracle blog: Amardeep Sidhu, January 12, 2019, “Error while running ggsci”.

The blog above was a great help.

$ ./ggsci 

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.170919 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_171030.0908_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Oct 30 2017 20:49:22
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


2019-02-15 18:04:14.827 
CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process
Additional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1
CLSU-00103: error location: sclsdgcwd2
(:CLSD00183:)

Results from strace:

strace ./ggsci 
mkdir("/u01/app/oracle/product/12.1.0/client_2/log", 01777) = -1 EACCES (Permission denied)

I did take a different path for resolution.

The gid is different for ggsuser and oracle:

uid=1521(ggsuser) gid=1500(dba)      groups=1500(dba),1501(oinstall)
uid=1500(oracle)  gid=1501(oinstall) groups=1501(oinstall),1500(dba)

As root, chmod 775 -R /u01/app resolved the issue.

# cd /u01/
# chmod 775 -R app/
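
A narrower alternative (my assumption, not what was done here) would be to pre-create only the directory ggsci tries to make, with the mode seen in the strace output:

# mkdir -m 1777 /u01/app/oracle/product/12.1.0/client_2/log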

However, this does not explain why it was working before adding GoldenGate to CRS.

# agctl add goldengate GoldenGate_instance \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--nodes node1,node2 \
--network 1 --ip 10.30.91.158 \
--user ggsuser \
--group dba \
--filesystems ora.acfs_data.acfs_vol.acfs \
--gg_home /u03/gg/12.2.0

Lateral view decorrelation(VW_DCL) causes wrong results with rownum

XTended Oracle SQL - Fri, 2019-02-15 17:45

Everyone knows that rownum in inline views blocks many query transformations, for example predicate pushing/pulling, scalar subquery unnesting, etc., and many people use it for such purposes as a workaround to avoid unwanted transformations (or even CBO bugs).

Obviously, the main reason for that is the different calculation of rownum:

If we pull the predicate “column_value = 3” from the following query to a higher level

select * 
from (select * from table(odcinumberlist(1,1,1,2,2,2,3,3,3)) order by 1)
where rownum <= 2
  and column_value = 3;


COLUMN_VALUE
------------
           3
           3

we will get different results:

select * 
from (select *
      from (select * from table(odcinumberlist(1,1,1,2,2,2,3,3,3)) order by 1)
      where rownum <= 2
     )
where column_value = 3;

no rows selected

Doc ID 62340.1


But we recently encountered a bug with it: a lateral view with ROWNUM returns wrong results in case of lateral view decorrelation.
Compare the results of this query with and without the no_decorrelate hint:

with 
 t1(a) as (select * from table(odcinumberlist(1,3)))
,t2(b) as (select * from table(odcinumberlist(1,1,3,3)))
,t(id) as (select * from table(odcinumberlist(1,2,3)))
select
  *
from t,
     lateral(select/*+ no_decorrelate */ rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id
            )(+)
order by 1,2;

        ID         RN
---------- ----------
         1          1
         1          2
         2
         3          1
         3          2
with 
 t1(a) as (select * from table(odcinumberlist(1,3)))
,t2(b) as (select * from table(odcinumberlist(1,1,3,3)))
,t(id) as (select * from table(odcinumberlist(1,2,3)))
select
  *
from t,
     lateral(select rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id
            )(+)
order by 1,2;

        ID         RN
---------- ----------
         1          1
         1          2
         2
         3          3
         3          4

Of course, we can draw conclusions even from these results: we can see that in the case of decorrelation (the query without the hint) rownum was calculated before the join. But to be sure, we can check the optimizer’s 10053 trace:

Final query after transformations:

******* UNPARSED QUERY IS *******
SELECT VALUE(KOKBF$2) "ID", "VW_DCL_76980902"."RN" "RN"
  FROM TABLE("ODCINUMBERLIST"(1, 2, 3)) "KOKBF$2",
       (SELECT ROWNUM "RN_0", VALUE(KOKBF$0) "ITEM_3"
          FROM TABLE("ODCINUMBERLIST"(1, 3)) "KOKBF$0",
               TABLE("ODCINUMBERLIST"(1, 1, 3, 3)) "KOKBF$1"
         WHERE VALUE(KOKBF$0) = VALUE(KOKBF$1)
        ) "VW_DCL_76980902"
 WHERE "VW_DCL_76980902"."ITEM_3"(+) = VALUE(KOKBF$2)
 ORDER BY VALUE(KOKBF$2), "VW_DCL_76980902"."RN"

*************************

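For reference, such a 10053 trace can be produced with the standard event syntax (a minimal sketch):

SQL> alter session set events '10053 trace name context forever, level 1';
SQL> -- run the query of interest, then:
SQL> alter session set events '10053 trace name context off';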

I’ll modify it a bit just to make it more readable: we can see that

select
  *
from t,
     lateral(select rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id)(+)
order by 1,2;

was transformed to

select
  t.id, dcl.rn
from t,
     (select rownum rn 
      from t1,t2 
      where t1.a=t2.b) dcl
where dcl.a(+) = t.id
order by 1,2;

And this confirms that rownum was calculated on a different dataset (the t1-t2 join), without the join filter on table t.
I created an SR with Severity 1 (SR #3-19117219271) more than a month ago, but unfortunately Oracle development does not want to fix this bug; moreover, they say it is not a bug. I think this is a dangerous precedent: soon we may no longer be able to rely on the calculation of rownum and on old fixes…

Categories: Development

HEUG Alliance 2019

Jim Marion - Fri, 2019-02-15 13:50
With the HEUG Alliance 2019 conference starting in a few weeks, it is time to finalize our session schedules. Reviewing the agenda, I see many great education sessions from partners such as Presence of IT, SpearMC, Appsian, and Mutara Inc, as well as many, many customer sessions covering important topics including security, user experience, integration, tools, add-on products and so on. This is clearly an Alliance we don't want to miss! On Monday I will be presenting new PeopleTools Tips and Techniques, and then on Wednesday I am leading the workshop PeopleSoft Fluid: Zero to Hero in an Afternoon. Session details:
I look forward to seeing you at Alliance 2019!

Now Available: Oracle Management Cloud for E-Business Suite

Steven Chan - Fri, 2019-02-15 12:02

On behalf of our colleagues in the Oracle Management Cloud development team, we are pleased to announce the release of Oracle Management Cloud (OMC) for Oracle E-Business Suite for end-to-end, cloud-based monitoring.

OMC offers a simple, single-click discovery interface that you can use to obtain a fast, strategic look at the entire application environment:

All Oracle E-Business Suite administrators will benefit from OMC's detailed health and performance views of key EBS components such as:

  • Concurrent Processing
  • Forms Services
  • Workflow
  • WebLogic Server
  • Database

Oracle Management Cloud utilizes machine learning to track and provide alerts on key performance indicators such as concurrent request volume and runtime characteristics, and pending Forms sessions.

In addition to these various EBS components, OMC can also monitor a wide range of infrastructure hardware such as load balancers, switches, firewalls, and storage arrays.

Categories: APPS Blogs

New search function for Oracle database features

Yann Neuhaus - Fri, 2019-02-15 10:25

Oracle released a new way to search for database features, and it is …

..really great.
Try out the cool new search application for Oracle database features.

What is special about it? The new site gives you a very smart overview of the database features and the versions in which they are available (at least from 11 to 19).

As an example, when you search for “compression” it shows you this output:

With a click on the feature you are interested in, you can see a short description and jump directly into the documentation.

Oh, and if somebody from Oracle is reading this:
please also add the license information for all the features. Thank you :-)

Playing around with the new site, I already found some features I did not know existed.
Or did you know there is an ASM cloning feature starting with Oracle 18c and Flex Disk Groups?

Have a nice weekend.

P.S.
Thanks to Thomas Teske from Oracle, who showed me this feature!

The article New search function for Oracle database features appeared first on Blog dbi services.

Efficient way to fetch data using rownumber from a billion record table??

Tom Kyte - Fri, 2019-02-15 10:06
I'm trying to fetch the required data from a billion-record table using the row number approach (i.e., select "ingest_ts_utc" from (SELECT to_char(sys_extract_utc(systimestamp), 'YYYY-MM-DD HH24:MI:SS.FF') as "ingest_ts_utc" ,ROWNUM as rno from XYZ.ABC ) ...
Categories: DBA Blogs

using sql loader for a table where you only have insert privileges via a non default role

Tom Kyte - Fri, 2019-02-15 10:06
Team, I'm trying to use SQL Loader to load a text file into a normal Oracle table (as opposed to an external one). Unfortunately I only have insert rights to the table via a non-default role. If I were just doing a simple insert statement in sq...
Categories: DBA Blogs

ADF Performance Improvement with Nginx Compression

Andrejus Baranovski - Fri, 2019-02-15 08:54
We are using the Nginx web server for the Oracle ADF WorkBetter demo hosted on a DigitalOcean cloud server. Nginx helps to serve web application content fast and offers improved performance. One of the important tuning options is content compression; Nginx does this job well and is simple to set up.

Content compression doesn't provide a direct runtime performance gain; a browser would run the same code whether it was compressed or not. But it brings improved perceived performance (which is very important): network time is much shorter because of the reduced content size. Oracle ADF is a server-side framework; each request brings content from the server, and the faster this content arrives, the better the application performance.

1. Content Compression = OFF

Let's see the stats when no content compression is applied (using our Oracle ADF WorkBetter hosted demo).

Page load size is 2.69 MB transferred. Finish time 1.55 s:


Navigation to the employee section generates 165.76 KB and finish time 924 ms:


Navigation to employee compensation generates 46.19 KB and finish time 494 ms:


2. Nginx compression

Compression is simple to set up in Nginx. Gzip settings go in nginx.conf; make sure to list all content types which must be supported for compression. Restart the nginx process after the new settings are saved in nginx.conf:
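
As an illustration, a typical set of gzip directives looks like the lines below (a sketch with commonly used values, not the exact settings from the screenshot):

$ cat /etc/nginx/nginx.conf
...
gzip on;
gzip_min_length 1024;
gzip_comp_level 5;
gzip_types text/plain text/css application/json application/javascript application/xml text/javascript;
$ sudo nginx -s reload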


3. Content Compression = ON

Page load size is 733.84 KB transferred. Finish time 1.48 s:


Navigation to the employee section generates 72.75 KB and finish time 917 ms:


Navigation to employee compensation generates 7.59 KB and finish time 498 ms:

[BLOG] Oracle GoldenGate 12c Overview & Components

Online Apps DBA - Fri, 2019-02-15 04:24

Are you a Beginner who has just started learning Oracle GoldenGate? If yes, then visit: https://k21academy.com/goldengate11 and learn: ✔What is Extract Process and how it can be configured ✔What is Replicat and Collector ✔All other different components of GoldenGate and much more… […]

The post [BLOG] Oracle GoldenGate 12c Overview & Components appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

A few scripting languages for Documentum

Yann Neuhaus - Fri, 2019-02-15 03:25

Besides the obsolete dmbasic, the autistic dmawk, the verbose java with the DfCs, and the limited iapi (for API) and idql (for DQL) command-line tools, Documentum does not offer any scripting language for the administrator, and the out-of-the-box experience is quite frustrating in this respect. It has been so even before the java trend, so it is not a maneuver to force the use of the DfCs or to rely on them for administrative tasks. It looks more like an oversight, or like this was considered a low-priority need.
Of course, people didn't stop at this situation and developed their own bindings with their preferred scripting languages. I found db::Documentum for perl, yours truly's DctmAPI.py for python (refer to the article here), dmgawk for gawk (see here), and of course all the JVM-based languages that leverage the DfCs, such as groovy, beanshell, jython, jruby, etc… Such JVM-based scripting languages actually only need to import the DfCs library and off they go for the next killer script. In this article, I'll show how to set up the binding for a few of those languages under the linux O/S.

db::Documentum

This is a perl v5 module that permits access to the Documentum api from the perl interpreter. It was developed by M. Scott Roth, see here, originally only for the Windows O/S and EDMS v3.1.5. Thanks to other contributors, it is now compilable under several flavors of Unices, including Linux. It is downloadable from here.
You’ll need the GNU C compiler to generate the module. Here is a detailed, step by step description of the installation procedure.

# download the archive Db-Documentum-1.64.tar.gz from here http://www.perl.com/CPAN/modules/by-module/Db/
# decompress it in, say, db::Documentum
tar -zxvf Db-Documentum-1.64.tar.gz
 
# move to the newly created directory Db-Documentum-1.64;
cd Db-Documentum-1.64
 
# prepare the following needed paths;
# DM_HOME
# path to the Documentum installation, e.g. /home/dmadmin/documentum
# DM_LIB
# path to the Documentum libdmcl.so library, e.g. ${DM_HOME}/product/7.3/bin
# note: there is also the obsolescent libdmcl40.so but I've encountered problems with it, mostly "Segmentation fault (core dumped)" crashes, so use the JNI-based libdmcl.so instead; it starts more slowly as it uses java but it is more reliable and is still supported;
# DM_INCLUDE
# path to the include file dmapp.h, e.g. ${DM_HOME}/share/sdk/include
 
# edit the linux section in Makefile.PL and provide the above paths;
# also, move up the $DM_LIB initialization before the dmcl.so test and comment the line beginning with $DM_CLIENT_LIBS =
# here is how that section looks like after editing it:

elsif ( $OS =~ /linux/i ) {
 
# The path to your Documentum client installation.
$DM_HOME = '/home/dmadmin/documentum';
 
# This is kinda a gottcha, the Linux stuff is in unix/linux
# You may have to tweak these.
 
# Path to documentum client libraries.
#$DM_LIB = "$DM_HOME/unix/linux";
$DM_LIB = "$DM_HOME/product/7.3/bin";
 
# dmcl.so file
if (! -e "$DM_LIB/libdmcl.so") {
warn "\n*** WARNING *** Could not find $DM_LIB/libdmcl.so.\nThe module will not make without " .
"libdmcl.so.\n";
}
 
# Path to directory where dmapp.h lives.
#$DM_INCLUDE = "-I/documentum/share/sdk/include/";
$DM_INCLUDE = "-I$DM_HOME/share/sdk/include/";
 
#$DM_CLIENT_LIBS = "-L$DM_LIB -ldmapi -ldmupper -ldmlower -ldmcommon -ldmupper -lcompat";
$DM_RPC_LIBS = "-L$DM_LIB -lnwrpc -lnwstcp";
$OS_LIBS = "-lsocket -lnsl -lintl";
$CPP_LIBS = "-lC";
$LD_LIBS = "-ldl";
$CPP_INC = "";
$CCFLAGS = "";
}

 
# execute the Makefile.PL;
perl Makefile.PL
 
# if the error below occurs, you must install the perl-devel module using the native package deployment tool for your distribution,
# e.g. sudo yum install perl-devel for centos:
# Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
#BEGIN failed--compilation aborted at Makefile.PL line 1.
 
# a Makefile file has been generated; correct the 2 lines below as showed;
EXTRALIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl
LDLOADLIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl -lc
 
# use the newly produced Makefile;
make
 
# run some tests to check the new module;
make test
 
# the test completes successfully but, sometimes, it is followed by SIGSEGV in the JVM;
# as it occurs at program termination, it can be ignored;
 
# install the new perl module system-wide;
sudo make install

Now that we have the module, let's use it in a simple test case: dumping all the dm_sysobjects linked in the cabinet /dmadmin (its id is 0c00c35080000105) in the out-of-the-box dmtest repository.

cat my-test.pl
#!/usr/bin/perl

use Db::Documentum qw(:all);
use Db::Documentum::Tools qw(:all);

# print version
Db::Documentum::version;

$docbase = "dmtest";
$username = "dmadmin";
$password = "dmadmin";

# connect;
$result = dm_Connect($docbase, $username, $password) || die("could not connect in " . $docbase . " as " . $username . " with password " . $password);

# run the query;
$status = dmAPIExec("execquery,c,,select r_object_id, r_object_type, object_name from dm_sysobject where folder(ID('0c00c35080000105'))");
if (1 != $status) {
   $err_mess = dmAPIGet("getmessage,c");
   print $err_mess;
   die();
}
$query_id = dmAPIGet("getlastcoll,c");
printf "%-16s  %-20s  %s\n", "r_object_id", "r_object_type", "object_name";
while (dmAPIExec("next,c," . $query_id)) {
   $r_object_id = dmAPIGet("get,c," . $query_id . ",r_object_id");
   $r_object_type = dmAPIGet("get,c," . $query_id . ",r_object_type");
   $object_name = dmAPIGet("get,c," . $query_id . ",object_name");
   printf "%16s  %-20s  %s\n", $r_object_id, $r_object_type, $object_name;
}
dmAPIExec("close,c," . $query_id);

# disconnect;
dmAPIExec("disconnect,c");
exit;

The script is very trivial and needs little explanation. Note the new functions dm_Connect, dmAPIExec and dmAPIGet. dmAPISet, dmAPIInit and dmAPIDeInit are also available, but the last two don't need to be invoked explicitly, for they are called automatically at module load/unload time.
Example of execution:

perl my-test.pl
 
Perl version: 5.016003
Db::Documentum version: 1.64
DMCL version: 7.3.0000.0205
 
r_object_id       r_object_type         object_name
0800c3508000019b  dm_job                dm_PropagateClientRights
0800c3508000019c  dm_job                dm_PostUpgradeAction
0800c35080000408  dmc_wfsd_type_info    integer
0800c35080000409  dmc_wfsd_type_info    boolean
0800c3508000040a  dmc_wfsd_type_info    double
0800c3508000040b  dmc_wfsd_type_info    string
0800c3508000040c  dmc_wfsd_type_info    date
0800c3508000040d  dmc_wfsd_type_info    repeating integer
0800c3508000040e  dmc_wfsd_type_info    repeating boolean
0800c3508000040f  dmc_wfsd_type_info    repeating double
0800c35080000410  dmc_wfsd_type_info    repeating string
0800c35080000411  dmc_wfsd_type_info    repeating date
0800c35080000426  dm_sysobject          dm_indexAgentAcquireLock
0800c35080000587  dm_client_rights      dfc_localhost_c0XP4a
0800c35080001065  dm_jms_config         JMS dmtest:9080 for dmtest.dmtest
0800c35080001066  dm_jms_config         JMS localhost.localdomain:9080 for dmtest.dmtest
0b00c35080000233  dm_folder             Workspace Customizations
1700c3508000015d  dm_outputdevice       Default LP Printer
3a00c3508000013f  dm_location           storage_01
3a00c35080000140  dm_location           common
3a00c35080000141  dm_location           events
3a00c35080000142  dm_location           log
3a00c35080000143  dm_location           config
3a00c35080000144  dm_location           dm_dba
3a00c35080000145  dm_location           auth_plugin
3a00c35080000146  dm_location           ldapcertdb_loc
3a00c35080000147  dm_location           temp
3a00c35080000148  dm_location           dm_ca_store_fetch_location
3a00c35080000153  dm_location           convert
3a00c35080000154  dm_location           dsearch
3a00c35080000155  dm_location           nls_chartrans
3a00c35080000156  dm_location           check_signature
3a00c35080000157  dm_location           validate_user
3a00c35080000158  dm_location           assume_user
3a00c35080000159  dm_location           secure_common_area_writer
3a00c3508000015a  dm_location           change_password_local
3a00c3508000015b  dm_location           thumbnail_storage_01
3a00c3508000015c  dm_location           streaming_storage_01
3a00c35080000226  dm_location           replicate_location
3a00c35080000227  dm_location           replica_storage_01
3e00c35080000149  dm_mount_point        share
6700c35080000100  dm_plugin             CSEC Plugin
6700c35080000101  dm_plugin             Snaplock Connector

Now, the power of perl and its more than 25'000 modules is at our fingertips to help us tackle those hairy administrative tasks!

groovy

Being a JVM-based language, groovy runs on top of a JVM and therefore benefits from all its advantages, such as automatic garbage collection (although this is not an exclusivity of java) and portability (ditto), and it can tap into the uncountable existing java libraries (ditto).
groovy is used in Documentum's xPlore.
groovy is a powerful, yet easy to learn, programming language still actively maintained by the Apache foundation, cf. here. Similar to java but without its verbosity, it should instantly appeal to java developers doing Documentum administrative tasks.
groovy comes with groovysh, a comfortable and powerful interactive shell for trying out statements and experimenting with the language.
By importing the DfCs, we can use groovy to access Documentum repositories. Knowledge of the DfCs is required of course.
To install groovy, use the distribution's package manager; e.g. on my Ubuntu, I've used:

sudo apt-get install groovy

while on Centos, the following command will do it:

sudo yum install groovy
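
Before writing a full script, groovysh is handy for quick experiments (a sketch):

$ groovysh
groovy:000> println "hello".toUpperCase()
groovy:000> :exit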

To test it, let’s use the same program as for perl, but rewritten a la groovy:

#! /usr/bin/groovy

import java.io.*;

import com.documentum.fc.client.*;
import com.documentum.fc.common.*;

   static void main(String[] args) {
      docbroker = "dmtest";
      docbase = " dmtest";
      username = "dmadmin";
      password = "dmadmin";
   
      println("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker);
   
      try {
         client = DfClient.getLocalClient();
      
         config = client.getClientConfig();
         config.setString ("primary_host", docbroker);
      
         logInfo = new DfLoginInfo();
         logInfo.setUser(username);
         logInfo.setPassword(password);
         docbase_session = client.newSession(docbase, logInfo);
      
         if (docbase_session != null) {
            println("Got a session");
      
            // do something in the session;
            folderId = new DfId("0c00c35080000105");
            folder = docbase_session.getObject(folderId);
            attrList = "r_object_id,r_object_type,object_name";
            coll = folder.getContents(attrList);
      
            while (coll.next())
               System.out.printf("ObjectId=%-16s r_object_type=%-20s ObjectName=%s\n",
                                 coll.getString("r_object_id"),
                                 coll.getString("r_object_type"),
                                 coll.getString("object_name"));
            println("Finished");
            docbase_session.disconnect();
         }
         else
            println("Didn't get a session");
      }
      catch (e) {
         println("Exception was: " + e);
      }
   }

Lines 6 & 7 import the DfC so don’t forget to add them to the CLASSPATH; normally they are because the execution environment is a Documentum client, e.g.:

export JAVA_HOME=/home/dmadmin/documentum/shared/java64/1.8.0_77
export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar
export PATH=$JAVA_HOME/bin:$PATH

Line 15 & 38 show that besides its own built-in functions, groovy can also use equivalent functions from the java libraries.
To invoke the script:

groovy tgroovy.gry
# or make it executable and call it:
chmod +x tgroovy.gry
./tgroovy.gry

Here is its output:

attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
...
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector
Finished

jython

jython is a python implementation written in java, see here.
As such, it offers most of the features of the powerful python language, although it lags behind the latest python version (v2.5.3 vs. 3.7).
Like java, groovy, jruby, scala, etc., jython runs inside a JVM and can use all the java libraries such as the DfCs, becoming a Documentum client with no changes except adding the DfCs to the $CLASSPATH.
jython appeals especially to people who already know python; as with groovy, a basic knowledge of the DfCs is required.
To install jython, use your distribution’s package manager, e.g.

# for ubuntu:
sudo apt-get install jython

Make sure the DfCs are present in $CLASSPATH, otherwise add them:

export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH

When running the test script below, the DfCs complain about a bad date format:

20:42:05,482 ERROR [File Watcher] com.documentum.fc.common.DfPreferences - [DFC_PREFERENCE_BAD_VALUE] Bad value for preference "dfc.date_format", value="M/d/yyyy, h:mm:ss a"
com.documentum.fc.common.DfException: Illegal syntax found in the date format 'M/d/yyyy, h:mm:ss a'. The default localized short date format will be used.
at com.documentum.fc.common.DfException.newIllegalDateFormatException(DfException.java:109)

Evidently, they are unhappy with the default date format. The work-around is to force one in the dfc.properties file by adding the line below (choose a date format that best suits you):

dfc.date_format=dd.MM.yyyy HH:mm:ss

Since the error did not occur with groovy (nor in the provided JNI-enabled command-line tools such as iapi, idql and dmawk), it is not the DfCs per se that have this problem but the combination of java + jython + DfCs.
Here comes the test script:

#!/usr/bin/env jython

# install jython via your O/S package manager;
# export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH
# edit the documentum/shared/config/dfc.properties and add a dfc.date property, e.g.:
# dfc.date_format=dd.MM.yyyy HH:mm:ss
# execute:
#   jython test.jy
# or:
#   chmod +x test.jy; ./test.jy
# can also be executed interactively as follows:
# start jython:
#    jython
# call the test script;
#    execfile("/home/dmadmin/test.jy")

import traceback
import com.documentum.fc.client as DFCClient
import com.documentum.fc.common as DFCCommon

docbroker = "dmtest"
docbase = "dmtest"
username = "dmadmin"
password = "dmadmin"
print("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker)
try:
  client = DFCClient.DfClient.getLocalClient()

  config = client.getClientConfig()
  config.setString ("primary_host", docbroker)

  logInfo = DFCCommon.DfLoginInfo()
  logInfo.setUser(username)
  logInfo.setPassword(password)
  docbase_session = client.newSession(docbase, logInfo)

  if docbase_session is not None:
    print("Got a session")
    # do something in the session;
    folderId = DFCCommon.DfId("0c00c35080000105")
    folder = docbase_session.getObject(folderId)
    attrList = "r_object_id,r_object_type,object_name"
    coll = folder.getContents(attrList)
    while coll.next():
       print("ObjectId=" + "%-16s" % coll.getString("r_object_id") + 
             " r_object_type=" + "%-20s" % coll.getString("r_object_type") +
             " ObjectName=" + coll.getString("object_name"))
    print("Finished")
    docbase_session.disconnect()
  else:
    print("Didn't get a session")
except Exception, e:
    # jython 2.5.3 implements python 2.5, hence the pre-2.6 "except ..., e" syntax;
    # print the exception instance, not the Exception class
    print("Exception was: " + str(e))
    traceback.print_exc()

Execution:

jython test.jy
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/usr/share/java/jython-2.5.3.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
...
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector
Finished

Ironically, jython's launcher is a perl script; it basically initializes java and python environment variables such as the classpath, java options and the jython home and path. If the initial WARNINGs are disruptive, edit that script and correct the problem, or just redirect stderr to /dev/null, e.g.:

jython test.jy 2> /dev/null

So, which one to choose?

To summarize, the decision tree below may help you choose one of the preceding scripting languages.

DfCs knowledge ?:
    java proficiency ?:
        choose groovy with the DfCs
    else python proficiency ?:
        choose jython with the DfCs
    else select one of the following ones:
        get acquainted with one of the above languages
        | choose another JVM-based language
        | give up the DfCs and use DQL/API with perl, python or gawk instead (see below)
else perl proficiency ?:
    choose db::Documentum
else python proficiency ?:
    choose python and DctmAPI.py
else nawk/gawk proficiency ?:
    choose gawk and dmgawk binding
else select one of the following:
    learn one of the above scripting languages
    | develop a Documentum binding for your preferred scripting language not in the list
    | hire dbi-services for your administrative tasks or projects ;-)

The DfCs are clearly the natural choice for java programmers. They are still supported, and new features are always accessible from them. There are tasks which cannot be done through the API or DQL but only through the DfCs, although those are generally out of the scope of an administrator. Note that even the non-java, non-DfCs languages still end up invoking the DfCs in the background, because they are linked with the libdmcl.so library, which makes behind-the-scenes JNI calls to the DfCs on their behalf, thus hiding their complexity. Hopefully, this shared library will stay with us for some time yet; otherwise our choice of scripting languages will be seriously restricted to JVM-based languages and the DfCs.

The post A few scripting languages for Documentum appeared first on Blog dbi services.

See You At The Edge Conference in Austin

Anthony Shorten - Thu, 2019-02-14 16:31
I will be attending the Oracle Utilities Edge Conference in Austin, TX in March. This year the agenda is slightly different from past years, with the Technical Sessions joining the Cloud sessions as part of the Cloud and Technology Track, so I will have lots more speakers this year. I will be running a few sessions on our next-generation Testing solution, migration to the cloud, and a deep dive into our futures, as well as co-chairing an exciting session on our directions in Machine Learning in the Oracle Utilities stack.

Oracle Create Schema – multiple DDL statements in a single transaction

The Anti-Kyte - Thu, 2019-02-14 16:08

I was going to begin with some extended Brexit metaphor to illustrate the chaos and confusion that can ensue when you first encounter Oracle’s CREATE SCHEMA command.
Fortunately, the Dutch Government saved me the trouble :

[image: dutch_brexit_monster]

Much as I’d like to believe that the Cookie Monster has finally overcome his Sesame Street typecasting, I can’t help noticing that the Brexit Monster never seems to be in the same room as this guy…

[image: boris]

In Oracle, the term “schema” is used interchangeably with the term “user”. Creating a user in Oracle automatically creates a schema of the same name belonging to that user.
The process is so seamless that it’s almost impossible to tell where one begins and the other ends.
You may therefore be somewhat confused the first time you encounter Oracle’s CREATE SCHEMA command…

Contrary to what you might expect, CREATE SCHEMA does not enable you to create a database user. What it does do is enable you to run multiple DDL statements inside a single transaction.
Now, if you thought that any DDL statement in Oracle would end with an implicit commit of the current transaction, well that makes two of us (at least).
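For reference, the behaviour we’re all used to looks something like this (a minimal sketch using hypothetical table names) :

create table usual_suspects( id number )
/

insert into usual_suspects values(1)
/

-- any ordinary DDL statement implicitly commits the current transaction...
create table red_lines( id number )
/

-- ...so this rollback has nothing left to undo and the inserted row survives
rollback
/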

To demonstrate why this is not necessarily true, let’s say we have a user created like this :

create user brexit_monster identified by lets_go_dutch 
    default tablespace users quota 5M on users
/

grant create session, create table, create view, create procedure to brexit_monster
/

…and a role created like this :

create role erg
/

If we want to create a table in the brexit_monster schema, then a view on that table, followed by a grant to the ERG role we could achieve this with three separate statements in a script…

create table brexit_types( brexit_cid varchar2(30),  classification varchar2(65000))
/

create or replace view promised_land as select brexit_cid from brexit_types where classification = 'HARD'
/

grant select on promised_land to erg
/

…however, if the first statement fails, the next two will also fail, leaving something rather messy…

[screenshot: script_errors]

If instead, we were to wrap these statements into a single CREATE SCHEMA…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(65000))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg
/   
    

…the error stack is somewhat more manageable

[screenshot: cs_err1]

Note however, that the View statement has changed a bit. It’s now a straight CREATE rather than CREATE OR REPLACE.
In fact, if you try to plug any DDL statement into CREATE SCHEMA that is not either a CREATE TABLE, CREATE VIEW or GRANT then you will get :

ORA-02422: missing or invalid schema element
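For example, trying to sneak in a CREATE INDEX – something you might reasonably expect to be acceptable – gets the whole statement rejected (a minimal sketch, with a hypothetical table) :

-- hypothetical example : CREATE INDEX is not a permitted schema element,
-- so the entire statement fails with ORA-02422 and nothing is created
create schema authorization brexit_monster
    create table border_checks( check_id number )
    create index border_checks_idx on border_checks( check_id )
/

Any index would have to be created in a separate statement once the CREATE SCHEMA itself has completed.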

If we issue the correct statement…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(5))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg
/   


Schema created.


…we can see that all of the DDL has been executed…

select object_name, object_type
from user_objects
order by 2
/

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -----------------------
BREXIT_TYPES		       TABLE
PROMISED_LAND		       VIEW

select grantee, privilege
from user_tab_privs
where owner = 'BREXIT_MONSTER'
and table_name = 'PROMISED_LAND'
/

GRANTEE 		       PRIVILEGE
------------------------------ ------------------------------
ERG			       SELECT

At this point you may still be somewhat skeptical about whether all this really happens in a single transaction.

I mean, how do we know that Oracle isn’t just parsing each statement to make sure they’re all valid and then trusting it won’t hit a runtime error ?
One way to find out is, of course, to engineer a runtime error.

You remember when I created the BREXIT_MONSTER user and you thought that I was a bit stingy with the tablespace quota allocation ? Well…

set serverout on
exec dbms_output.put_line('Current transaction = '||dbms_transaction.local_transaction_id(true));

create schema authorization brexit_monster
    create table wto_terms(is_problem varchar2(3) default 'NO')
    create table little_objects as select * from all_objects fetch first 5 rows only
    create table my_objects as select * from all_objects
/   
exec dbms_output.put_line('Current transaction='||dbms_transaction.local_transaction_id(true));

When we run this we get …

[screenshot: runtime_err]

We can see that the local_transaction_id has changed, so the transaction that the CREATE SCHEMA was running in has ended. The question is – has it been committed or rolled back ?
Now to check if any of the tables have been created…

select table_name
from user_tables
where table_name in ('WTO_TERMS', 'LITTLE_OBJECTS')
/

no rows selected

We could go into great detail here and do some digging around in trace files.
Then again, there’s been quite enough procrastination around this whole Brexit business already.
As we can see, CREATE SCHEMA really does execute multiple DDL statements in a single database transaction – i.e. either all statements succeed or none do.
In other words, if it walks like a Brexit Monster and talks about sunlight uplands it’s probably Boris Johnson.
