Feed aggregator

Polymorphic Table Functions – Part 2

Bar Solutions - 16 hours 52 min ago

In my previous post I wrote about a possible use-case for Polymorphic Table Functions. I also wrote that I had some ideas to implement extra functionality to make the function more generic.
After attending DOAG Konferenz und Ausstellung 2018 and UKOUG Tech18 I finally got around to publishing my new version.
Let's start off with a table to hold my data; I chose to use the NATO phonetic alphabet for this:

create table t_alphabet
(thecount number
,alphabet varchar2(4000)
)
/
begin
  insert into t_alphabet(alphabet, thecount) values ('Alfa;Bravo;Charlie;Delta;Echo;Foxtrot;Golf',7);
  insert into t_alphabet(alphabet, thecount) values ('Hotel;India;Juliett;Kilo;Lima;Mike;November',7);
  insert into t_alphabet(alphabet, thecount) values ('Oscar;Papa;Quebec;Romeo;Sierra;Tango;Uniform',7);
  insert into t_alphabet(alphabet, thecount) values ('Victor;Whiskey;;X-ray;Yankee;;Zulu',5);
  insert into t_alphabet(alphabet, thecount) values ('Alfa.Bravo.Charlie.Delta.Echo.Foxtrot.Golf',7);
  insert into t_alphabet(alphabet, thecount) values ('Hotel.India.Juliett.Kilo.Lima.Mike.November',7);
  insert into t_alphabet(alphabet, thecount) values ('Oscar.Papa.Quebec.Romeo.Sierra.Tango.Uniform',7);
  insert into t_alphabet(alphabet, thecount) values ('Victor.Whiskey..X-ray.Yankee..Zulu',5);
  commit;
end;
/

First of all, I learned from attending the presentations by Andrej Pashchenko (Polymorphic Table Functions in 18c: Einführung und Beispiele) and Keith Laker (Patterns and Use Cases For Polymorphic Tables) that there is no need for global (package) variables to have access to the parameters supplied. The more I can rely on Oracle to take care of the value of variables, the better I like it.
I won't bore you with all the intermediate versions of the code; let's jump straight into the 'final' result.
The package:

create or replace package separated_ptf is
  function describe(tab        in out dbms_tf.table_t
                   ,cols       in dbms_tf.columns_t default null
                   ,coltosplit in varchar2 default null
                   ,separator  in varchar2 default ';') return dbms_tf.describe_t;

  procedure fetch_rows(coltosplit in varchar2 default null
                      ,separator  in varchar2 default ';');
end separated_ptf;
/

The package body:

create or replace package body separated_ptf as
  function describe(tab        in out dbms_tf.table_t
                   ,cols       in dbms_tf.columns_t default null
                   ,coltosplit in varchar2 default null
                   ,separator  in varchar2 default ';') return dbms_tf.describe_t as
    -- metadata for column to add
    l_new_col dbms_tf.column_metadata_t;
    -- table of columns to add
    l_new_cols dbms_tf.columns_new_t;
    -- make sure the column to split is in the correct format (uppercase with doublequotes)
    l_coltosplit dbms_quoted_id := dbms_assert.enquote_name(str => coltosplit, capitalize => true);
  begin
    -- if the coltosplit parameter is null then
    if coltosplit is null then
      -- Mark the first column ReadOnly and don't display it anymore
      tab.column(1).for_read := true;
      tab.column(1).pass_through := false;
    else
      -- if the coltosplit parameter is not null then
      -- check every column from the source table
      for indx in tab.column.first .. tab.column.last loop
        -- if this is the column we want to split then
        if tab.column(indx).description.name = l_coltosplit then
          -- Mark this column ReadOnly and don't display it anymore
          tab.column(indx).for_read := true;
          tab.column(indx).pass_through := false;
        end if;
      end loop;
    end if;
    -- Add the new columns, as specified in the cols parameter
    for indx in 1 .. cols.count loop
      -- define metadata for column named cols(indx)
      -- that will default to a datatype of varchar2 with
      -- a length of 4000
      l_new_col := dbms_tf.column_metadata_t(name => cols(indx));
      -- add the new column to the list of new columns
      l_new_cols(l_new_cols.count + 1) := l_new_col;
    end loop;
    -- Instead of returning NULL we will RETURN a specific
    -- DESCRIBE_T that adds new columns
    return dbms_tf.describe_t(new_columns => l_new_cols);
  end;

  procedure fetch_rows(coltosplit in varchar2 default null
                      ,separator  in varchar2 default ';') is
    -- define a table type of varchar2 tables
    type colset is table of dbms_tf.tab_varchar2_t index by pls_integer;
    -- variable to hold the rowset as retrieved
    l_rowset dbms_tf.row_set_t;
    -- variable to hold the number of rows as retrieved
    l_rowcount pls_integer;
    -- variable to hold the number of put columns
    l_putcolcount pls_integer := dbms_tf.get_env().put_columns.count;
    -- variable to hold the new values
    l_newcolset colset;
    -- get the name of the column to be split from the get columns
    l_coltosplit dbms_quoted_id := trim('"' from dbms_tf.get_env().get_columns(1).name);
  begin
    --    dbms_tf.Trace(dbms_tf.Get_Env);
    -- fetch rows into a local rowset
    -- at this point the rows will have columns
    -- from the table/view/query passed in
    dbms_tf.get_row_set(l_rowset, l_rowcount);
    -- for every row in the rowset...
    for rowindx in 1 .. l_rowcount loop
      -- for every column
      for colindx in 1 .. l_putcolcount loop
        -- split the row into separate values
        --  FUNCTION Row_To_Char(rowset Row_Set_t,
        --                       rid    PLS_INTEGER,
        --                       format PLS_INTEGER default FORMAT_JSON)
        --           return VARCHAR2;
        -- splitting the regexp way: http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
        l_newcolset(colindx)(rowindx) := trim(separator from regexp_substr(json_value(dbms_tf.row_to_char(l_rowset, rowindx), '$.' || l_coltosplit)
                                                           ,'[^' || separator || ']*' || separator || '{0,1}'
                                                           ,1
                                                           ,colindx));
      end loop; -- every column
    end loop; -- every row in the rowset
    -- add the newly populated columns to the rowset
    for indx in 1 .. l_putcolcount loop
      dbms_tf.put_col(columnid => indx, collection => l_newcolset(indx));
    end loop;
  end;
end separated_ptf;
/

The wrapper function:

create or replace function separated_fnc(p_tbl      in table
                                        ,cols       columns default null
                                        ,coltosplit in varchar2 default null
                                        ,separator  in varchar2 default ';') return table
  pipelined row polymorphic using separated_ptf;
/

Back to the improvements I suggested earlier.
Supporting duplicate separators:
Using the regular expression '[^;]+' didn't make this possible, because a double separator (;;) would be regarded as a single one. So this had to be changed into '[^;]*;{0,1}', as used in the code above. This expression says (in my words): find zero or more characters which are not a ;, followed by 0 or 1 ;. Since this will result in a string with a ; at the end, I had to add the trim function around it.
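
To illustrate the difference, here is a small standalone query (my own sketch, not part of the original post) that applies the final pattern to a value containing a doubled separator:

select level as occurrence
      ,trim(';' from regexp_substr('Victor;Whiskey;;X-ray', '[^;]*;{0,1}', 1, level)) as element
  from dual
connect by level <= 4
/

As far as I can tell this returns Victor, Whiskey, an empty (NULL) element and X-ray, so the empty element between the two semicolons is preserved instead of being swallowed.
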
Making the column to be split a parameter:
To find the column to be split, I need to make the parameter value look the same as the tab.column(indx).description.name value. I can of course add the quotes around the parameter myself, but I think it is better to use the built-in sys.dbms_assert.enquote_name function to do this. If the value is not supplied or is null, I just (try to) split the first column.
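
As a quick illustration of what enquote_name does with the parameter value (again my own sketch, not part of the original post):

select dbms_assert.enquote_name('alphabet')   as unquoted_input
      ,dbms_assert.enquote_name('"alphabet"') as quoted_input
  from dual
/

The first expression should come back as "ALPHABET" (capitalized and enclosed in double quotes), the second as "alphabet" (already quoted, so returned unchanged), which matches the format of tab.column(indx).description.name.
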
Making the separator character a parameter:
This seemed like an easy task - just replace every ; in my code with a variable - but when running a couple of tests I received an error which put me on the wrong path.
Calling the function using positioned parameters works like a charm, just as you would expect it:
select *
from separated_fnc(t_alphabet, columns(first, second, third, fourth, fifth, sixth, seventh), 'alphabet', '.')
/

  THECOUNT FIRST                SECOND               THIRD                FOURTH               FIFTH                SIXTH                SEVENTH
---------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
         7 Alfa;Bravo;Charlie;D                                                                                                          
           elta;Echo;Foxtrot;Go                                                                                                          
           lf                                                                                                                            
         7 Hotel;India;Juliett;                                                                                                          
           Kilo;Lima;Mike;Novem                                                                                                          
           ber                                                                                                                           
         7 Oscar;Papa;Quebec;Ro                                                                                                          
           meo;Sierra;Tango;Uni                                                                                                          
           form                                                                                                                          
         5 Victor;Whiskey;X-ray                                                                                                          
           ;Yankee;Zulu                                                                                                                  
         7 Alfa                 Bravo                Charlie              Delta                Echo                 Foxtrot              Golf
         7 Hotel                India                Juliett              Kilo                 Lima                 Mike                 November
         7 Oscar                Papa                 Quebec               Romeo                Sierra               Tango                Uniform
         5 Victor               Whiskey                                   X-ray                Yankee                                    Zulu

8 rows selected

But when I tried to use named parameters like this:

select *
  from separated_fnc(t_alphabet, columns(first, second, third, fourth, fifth, sixth, seventh),coltosplit => 'alphabet',separator => '.')
/

it resulted in the following error:

ORA-62573: new column (FIRST) is not allowed with describe only polymorphic table function

After the DOAG conference I looked at this together with Chris Saxon, but we couldn't find out what was going on. So we contacted Keith Laker, who told me that it is a bug in the version (18.3.0.0.0) of the database I am using and that it should be fixed in an upcoming (patch) release.
As long as you know about this behavior, I think it is quite a useful function (and with the upcoming fixes it will become even more useful).

If you have any suggestions or maybe ideas for other use cases for Polymorphic Table Functions, please don’t hesitate to use the comments.

Cloud Control 13c Release 3 Database Template

Michael Dinh - Sat, 2018-12-08 18:20

Got excited to find:
Creating a Database Instance with Preconfigured Repository Using Database Templates

Templates can be downloaded at:
Database Template (with EM 13.3.0.0 repository pre-configured) for Installing Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0)

Got disappointed to find that the template is only available for 12.1.0.2.0 and 18.1.0.0.0 databases.

Creating a Database Instance with Preconfigured Repository Using Database Templates Caution:

Make sure that the 18.1.0.0.0_Database_Template_for_EM13_3_0_0_0_.zip file is used only for Enterprise Manager Cloud Control 13c Release 3.

It looks like the template version must match exactly, as the example below shows.

EM 13c: Creating an Enterprise Manager 13c Cloud Control Repository using a Database Template Fails: ORA-00603: ORACLE server session terminated by fatal error (Doc ID 2291220.1)	

While the 12.2.0.1 version of the database is supported for EM 13.2 repository use, a database template for the EM 13.2 repository database is only supplied for the 12.1.0.2 version of the database, and they are specific for the version:

Oracle skipped the 12.2 DB version when creating the DB templates, which is disappointing.

Does the database really need to be on 12.2?

First immersion in the Docker Conference EU 2018

Yann Neuhaus - Sat, 2018-12-08 12:02

In short, a very interesting event for both Devs and Ops. Every day was organized around workshops, hands-on-labs sessions and hallway tracks. My colleague Mehdi Bada and I tried to attend as many sessions as possible, but obviously it was difficult to cover all the topics.


Anyway, the workshops and hands-on-labs were very interesting, especially if you like to mix theory and practice. But I have to admit we sometimes struggled to keep up with the pace of some workshops. We ran into what I call the "context switch" issue: following the trainer's explanation and doing the exercises at the same time :) The migrating .NET applications to Docker workshop with Elton Stoneman (Docker) is one that comes to mind in this case :)

As database specialists at dbi services we obviously had a special focus on storage-oriented topics, and we were interested in attending sessions and workshops in this area, including the Use Cases and Practical Solutions for Docker Container Storage on Swarm and K8s session with Don Stewart (Docker) and Mark Church (Docker), as well as the Container Storage Panel Q&A with Ed Beauvais (Oracle), Chris Brandon (Storage OS Inc) and Keith Hudgins (Docker). We got an overview of the different possible solutions to implement file-based, block-based and object-based storage in order to address different workload patterns including fileserver, OLTP, BigData etc. The container storage landscape is actually large and vendor-specific, but the Docker storage team announced plans to introduce first snapshot / restore capabilities and to provide a CSI (Container Storage Interface) to offer a simple community-driven approach and a more predictable and functional interface for the most common use cases as well. Let's see what happens in the future, but my guess (speculation mode) is that for "specific" applications like databases, vendor storage drivers will likely remain the most viable option when performance is at the heart of the concerns.


Even if containers were formally designed to handle stateless applications, it is not uncommon to see databases in such infrastructures nowadays. After all, databases are also (special) applications, right? I was already convinced that containerization infrastructure is now mature enough to handle database workloads, and these sessions reinforced my strong belief that Docker Swarm or K8s are production-database-ready, from a storage perspective at least.

We also got the opportunity to attend workshops and sessions around container orchestrator topics, mainly Docker Swarm and K8s orchestration. It was interesting to see the same question often raised by attendees during these sessions: do we have to use Swarm over K8s, or vice versa? This is also a question we are going to ask for a dbi services internal project by the way, and obviously there is no black-or-white response. What is certain is that Docker Swarm remains important for customers, as confirmed by Steve Singh during the first general session on Tuesday 4th December 2018. We got feedback from customer stories like Citizens Bank that the orchestrator choice depends on different factors, and in the context of this customer they are using both through Docker EE :) We also attended other interesting Swarm and K8s infrastructure topics, including Swarm Orchestration – Features and Workflows by Bret Fisher (Docker Captain) and Container Networking for Swarm and Kubernetes in Docker Enterprise by Guillaume Morini (Docker) as well. Finally, probably one of my favorite workshops was Troubleshooting with Sysdig by Michael Ducy. Sysdig is one of the well-known monitoring / troubleshooting tools for containers on the market. Let's say that it was a subtle combination of a deep dive into Linux kernel principles and practical scenarios about using the sysdig tools to fix container issues, as confirmed by my working desktop below:


In addition to the sessions, workshops and hands-on-labs, new announcements were made at DockerCon EU 2018 during the general sessions, with Steve Singh (CEO) and Scott Johnston (Chief Product Officer) as the main speakers.


The first general session's announcements included new tools such as docker-app and docker-assemble, enhanced docker stack support for both Swarm and Kubernetes since Docker EE 2.0, and probably the most anticipated one: Docker Desktop Enterprise. It turns out that the adoption of Docker Desktop by developers was a real success, but it was not really designed to scale to enterprise-class environments, and this is basically what the Docker Desktop Enterprise product is supposed to address.

It was also an opportunity to get some interesting figures about Docker (EE) adoption across the world:

  • 1M new developers
  • 5M new applications
  • 1B containers downloaded every week
  • 650+ customers on Docker EE
  • 76.4% of companies running mission-critical apps in containers in production

The last one is by far my favorite because it highlights that most Docker environments are no longer limited, developer-scoped environments. Moreover, it is worth noting that the following figures seem to point out that Docker is not a visionary developer whim anymore and that it drives strong customer adoption thanks to an interesting ROI:

  • 69% differentiating products and services vs competitors
  • 70% increasing sales of products
  • 71% bringing products to market faster

Finally, let's finish with the second general session, which was more Docker community-oriented, and I know how important a community can be for contribution and for bringing people together for interaction as well. Like Microsoft with its MVPs, Docker has its Docker Captains, and they were thanked for their wonderful contribution during this event. But obviously contribution goes beyond MVPs, ACEs or Docker Captains, and Kal De (CTO, EVP, Product Development) explained how to contribute to the different Docker projects and then showed contribution figures from the community on GitHub:

  • Compose 1 MM monthly
  • 25K new compose files published on GitHub per week
  • 14K GitHub contributors – 280+ people

This first immersion in the Docker world conference was definitely a good experience and a great opportunity to feel the momentum around Docker and the future directions taken by the company. I also appreciated the discussions and feedback from some attendees during the networking track to prepare our future challenges on this topic.

 

 

 

The post First immersion in the Docker Conference EU 2018 appeared first on Blog dbi services.

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Pas Apicella - Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here

https://content.pivotal.io/blog/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today

The docs are available here:

https://docs.pivotal.io/pfs/index.html
Categories: Fusion Middleware

AWS: IAM & security - Best practices: Using a non-root user

Dietrich Schroff - Fri, 2018-12-07 18:35
After my successful solutions architect practice I knew that I had to take care of my shortcomings in security.
So I decided to visit the IAM (Identity and Access Management) section of AWS:

So let's move to IAM users:
Click on "Add user":

Then insert a "user name", choose an access type and click next:

Then you have to create the first group:

 I created a group with full AWS access:



Then move on with creating the user:

 Additional Tags:
 And finally click "create user"
 This will show you a page with an AWS management console URL:

Use this URL to log in with the new user:
(I had to change the password because the checkbox "require password reset" was ticked)

And then I am logged into my AWS Management Console with this non-root user:


If you want to log in with your root user, you have to use the link below the "sign in" button:

PostgreSQL : Get my database server name

Yann Neuhaus - Fri, 2018-12-07 16:32

I was looking for a built-in function to get the hostname of the server hosting my PostgreSQL cluster.
But it seems that there is no such built-in function. Looking at the available extensions, I found the hostname extension, which allows getting the database server's host name.
In this blog I explain how to install and how to use it. The installation is very easy. The first step is to download it here.

Then let's go to the directory where the archive was decompressed and run the command make:

master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:26 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:48 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-

master/pg-hostname-master/ [PG1] make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I. -I./ -I/u01/app/postgres/product/95/db_0/include/server -I/u01/app/postgres/product/95/db_0/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o src/hostname.o src/hostname.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -L/u01/app/postgres/product/95/db_0/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/product/95/db_0/lib',--enable-new-dtags  -shared -o src/hostname.so src/hostname.o
cp sql/hostname.sql sql/hostname--1.0.0.sql
14:46:55 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-

Once done, let's run the command make install:

pg-hostname-master/ [PG1] make install
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/lib'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/doc/extension'
/bin/install -c -m 644 .//hostname.control '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 644 .//sql/hostname--1.0.0.sql .//sql/hostname--unpackaged--1.0.0.sql  '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 755  src/hostname.so '/u01/app/postgres/product/95/db_0/lib/'
/bin/install -c -m 644 .//doc/hostname.mmd '/u01/app/postgres/product/95/db_0/share/doc/extension/'
14:47:29 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

If everything is OK, we should now have the extension files in $PGHOME/share/extension:

15:02:39 postgres@dbi-pg-essentials:[PG1] ls -ltra *hostname*
-rw-r--r--. 1 postgres postgres 140 Dec  5 14:47 hostname.control
-rw-r--r--. 1 postgres postgres  96 Dec  5 14:47 hostname--1.0.0.sql
-rw-r--r--. 1 postgres 

And that's all. We now just have to install the extension in the database:

postgres=# CREATE EXTENSION hostname;
CREATE EXTENSION

And now we can get the hostname of our server:

postgres=# SELECT hostname();
     hostname
-------------------
 dbi-pg-essentials
(1 row)

postgres=#
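
If installing an extension is not an option, a rough workaround (my own sketch, not part of the original post) is to read the output of the operating system's hostname command with COPY ... FROM PROGRAM, which requires superuser rights:

CREATE TEMP TABLE tmp_hostname(hostname text);
COPY tmp_hostname FROM PROGRAM 'hostname';
SELECT hostname FROM tmp_hostname;

On this server that should also return dbi-pg-essentials.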

The post PostgreSQL : Get my database server name appeared first on Blog dbi services.

PostgreSQL 12 : New option --socketdir for pg_upgrade

Yann Neuhaus - Fri, 2018-12-07 16:30

PostgreSQL 12 is under development, but tests can already be done. Steps to install this version can be found in this dbi blog. Many thanks to Daniel.
While reading the documentation I found that there is a new parameter for pg_upgrade. This new parameter is --socketdir.
Why this parameter?
The path name of a UNIX socket is allowed to be at most 108 characters long. Before PostgreSQL 12, the default directory for the sockets created for the temporary postmasters started by pg_upgrade was the current directory, and depending on the current directory the path name might become too long for a socket name. In PostgreSQL 12 the default location is still the current working directory, but the socketdir parameter now allows us to specify another location.

To better understand I am going to upgrade from PostgreSQL 10 to PostgreSQL 12

20:59:44 postgres@dbi-pg-essentials:/u02/pgdata/PG12TEST/ [PG12TEST] psql -U postgres -d postgres
psql (12devel dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11
(1 row)

postgres=#

Now let’s create a very deeply nested directory

mkdir -p ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd

And let's cd into this new directory and try to do an upgrade

$ export PGDATAOLD=/u02/pgdata/PG3
$ export PGBINOLD=/u01/app/postgres/product/10/db_1/bin
$ export PGDATANEW=/u02/pgdata/PG12TEST
$ export PGBINNEW=/u01/app/postgres/product/12dev/db_0/bin

When running pg_upgrade with the check option, we get the following error:

$ pg_upgrade -c
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: Unix-domain socket path "/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/.s.PGSQL.50432" is too long (maximum 107 bytes)

could not connect to source postmaster started with the command:
"/u01/app/postgres/product/10/db_1/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/u02/pgdata/PG3" -o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd'" start
Failure, exiting
$

It seems that the path name for the socket is too long.
If we use this new parameter, we can specify another location for the sockets, and we can see that the checks are now successful:

$ pg_upgrade -c --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

And then we can upgrade the cluster using this new parameter, while still staying in this newly created directory:

$ pg_upgrade  --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
$
Conclusion :

In this blog we have seen the new option --socketdir for pg_upgrade. It's good to know that this parameter exists, but in most cases the current working directory should be fine for an upgrade.

The post PostgreSQL 12 : New option --socketdir for pg_upgrade appeared first on Blog dbi services.

Ubuntu: if WLAN is gone after an update and lspci lists no wlan devices

Dietrich Schroff - Fri, 2018-12-07 12:42
On my Ubuntu laptop the WLAN networking was gone after an update. The first check was to boot into Windows and check whether the WLAN device was still working:
The WLAN still worked with Windows, so the worst case (a hardware error) did not hit me ;-)

So I switched back to Ubuntu and tried the following:

$ lspci -nnk | grep -iA2 net;
02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
    Subsystem: Acer Incorporated [ALI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [1025:104c]
    Kernel driver in use: r8169
    Kernel modules: r8169
Very strange - the WLAN network adapter was not even listed by lspci!
Another check showed:
$ lshw|grep -iA2 network
WARNUNG: Sie sollten dieses Programm mit Systemverwalterrechten (root) ausführen.
           *-network
                Beschreibung: Ethernet interface
                Produkt: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
It looks like the hardware controller is missing, but the check with Windows explicitly showed that the WLAN is still OK.

The solution was to run:
apt-get install bcmwl-kernel-source
(After connecting the laptop via ethernet to my router)

Then the commands listed the WLAN adapter again:
$ lspci -nnk | grep -iA2 net;
02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
    Subsystem: Acer Incorporated [ALI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [1025:104c]
    Kernel driver in use: r8169
    Kernel modules: r8169
03:00.0 Network controller [0280]: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter [168c:003e] (rev 32)
    Subsystem: Lite-On Communications Inc QCA6174 802.11ac Wireless Network Adapter [11ad:0807]
    Kernel driver in use: ath10k_pci
    Kernel modules: ath10k_pci, wl

$ lshw|grep -iA2 network
WARNUNG: Sie sollten dieses Programm mit Systemverwalterrechten (root) ausführen.
           *-network
                Beschreibung: Ethernet interface
                Produkt: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
--
           *-network
                Beschreibung: Kabellose Verbindung
                Produkt: QCA6174 802.11ac Wireless Network Adapter
                Hersteller: Qualcomm Atheros
                Physische ID: 0

I found the following via Google:
https://ubuntuforums.org/showthread.php?t=1411216


So let's try this:

# apt-mark hold bcmwl-kernel-source
bcmwl-kernel-source auf Halten gesetzt.

Plans and Trees

Jonathan Lewis - Fri, 2018-12-07 11:58

Prompted by a question on the ODC database forum – and also because I failed to get to the “Bonus slides” in my presentation on basic execution plans at both the DOAG and UKOUG conferences – here's a small set of slides demonstrating how to convert a text execution plan into a tree that you can read using the mechanism described in Oracle's white paper by the phrase: “start from the bottom left and work across and then up”.

The file is a Microsoft Powerpoint file (early version).

 

Misdirection

Jonathan Lewis - Fri, 2018-12-07 05:48

A recent post on the ODC database forum prompted me to write a short note about a trap that catches everyone from time to time. The trap is following the obvious; and it's a trap because it's only previous experience that lets you decide what's obvious, and the similarity between what you're looking at and your previous experience may be purely coincidental.

The question on OTN (paraphrased) was as follows:

When I run the first query below Oracle doesn't use the index on column AF and is slow, but when I run the second query Oracle uses the index and it's fast. So when the input starts with '\\' the indexes are not used. What's going on?


SELECT * FROM T WHERE AF = '\\domain\test\1123.pdf';
SELECT * FROM T WHERE AF = 'a\\domain\test\1123.pdf';

Looking at the two queries my first thought was that it’s obvious what’s (probably) happening, and my second thought was the more interesting question: “why does this person think that the ‘\\’ is significant ?”

The cause of the difference in behaviour is probably related to the way that Oracle stores statistics (specifically histograms) about character columns, and the way in which the cardinality calculations can go wrong. If two character strings match over the first few characters, the numeric representation of those strings that Oracle uses in a histogram is identical, and if they are long enough even the "actual value" stored would be identical. It looks as if this person is storing URLs, and it's quite likely that there are a lot of long URLs that start with the same (long) string of characters – it's a very old problem – and it's an example of a column where you probably want to be absolutely sure that you don't gather a histogram.
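
As a small illustration of that last point (the table and column names are taken from the OP's query, everything else is just a sketch): you can check whether the column currently has a histogram, and re-gather its statistics without one, along these lines:

select column_name, histogram
  from user_tab_col_statistics
 where table_name = 'T'
   and column_name = 'AF';

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'T',
    method_opt => 'for columns af size 1'   -- size 1 = no histogram on AF
  );
end;
/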

But why did the OP decide that the '\\' was the significant bit? I don't know, of course, but how about this:

  • No contrary tests: Perhaps every single time the query misbehaved the value started with ‘\\’ and it never went wrong for any other starting values. And maybe the OP tested several different domain names – it would be much easier to see the ‘\\’ as the common denominator rather than “repetitive leading character string” if you tested with values that spanned different domains.

combined with

  • An easily available “justification”: In many programming languages (including SQL) ‘\’ is an escape character – if you don’t really know much about how the optimizer works you might believe that that could be enough to confuse the optimizer.

It can be very difficult when you spot an obvious pattern to pause long enough to consider whether you’ve identified the whole pattern, or whether you’re looking at a special case that’s going to take you in the wrong direction.

 

API for Amazon SageMaker ML Sentiment Analysis

Andrejus Baranovski - Thu, 2018-12-06 13:50
Assume you manage a support department and want to automate some of the workload that comes from users requesting support through Twitter. You are probably already using a chatbot to send replies back to users. But this is not enough - some of the support requests must be treated with special care and handled by humans. How do you understand when a tweet should be escalated and when not? The book Machine Learning for Business has an answer. I recommend reading this book; my post today is based on Chapter 4.

You can download the source code for Chapter 4 from the book's website. The model is trained on a sample dataset from Kaggle - Customer Support on Twitter - using a subset of the available data of around 500,000 Twitter messages. The book's authors converted and prepared the dataset to be suitable to feed into Amazon SageMaker (the dataset can be downloaded together with the source code).

The model is trained in such a way that it doesn't check whether a tweet is simply positive or negative. The sentiment analysis is based on whether the tweet should be escalated or not - even a positive tweet may need to be escalated.

I followed the instructions from the book and was able to train and host the model. I created an AWS Lambda function and an API Gateway to be able to call the model from outside (this part is not described in the book, but you can check my previous post to get more info about it - Amazon SageMaker Model Endpoint Access from Oracle JET).

To test the trained model, I took two random tweets addressed to the Lufthansa account and passed them to the predict function. I exposed the model through an AWS Lambda function and created an API Gateway; this allows a REST request to be initiated from a tool such as Postman. A response with __label__1 means the tweet needs escalation, while __label__0 means it doesn't. The second tweet is more direct and refers to immediate feedback; it was labeled for escalation by our sentiment analysis model. The first tweet is a bit abstract, so no escalation for this one:


This is the AWS Lambda function; it gets the data from the request, calls the model endpoint and returns the prediction:

Let's have a quick look at the training dataset. Around 20% of the tweets are marked for escalation. This shows that there is no need for a 50/50 split in the training dataset. In real life the number of escalations is probably less than half of all requests, and this realistic scenario is represented in the dataset:


The ML model is built using the Amazon SageMaker BlazingText algorithm:


Once the ML model is built, we deploy it to an endpoint. The predict function is invoked through the endpoint:

Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems

Most organizations have silos of content spread out amongst databases, file shares, and one or more document management systems. Without a unified search system to tap into this information, knowledge often remains hidden and the assets employees create cannot be used to support design, manufacturing, or research objectives.

An enterprise search system that can connect these disparate content stores and provide a single search experience for users can help organizations increase operational efficiencies, enhance knowledge sharing, and ensure compliance. PTC Windchill provides a primary source for the digital product thread, but organizations often have other key systems storing valuable information. That is why it is critical to provide workers with access to associated information regardless of where it is stored.

This past August, Fishbowl released its PTC Windchill Connector for Google Cloud Search. Fishbowl developed the connector for companies needing a search solution that allows them to spend less time searching for existing information and more time developing new products and ideas. These companies need a centralized way to search their key engineering information stores, like PLM (in this case Windchill), ERP, quality database, and other legacy data systems. Google Cloud Search is Google’s next generation, cloud-based enterprise search platform from which customers can search large data sets both on-premise and in the cloud while taking advantage of Google’s world-class relevancy algorithms and search experience capabilities.

Connecting PTC Windchill and Google Cloud Search

Through Google Cloud Search, Google provides the power and reach of Google search to the enterprise. Fishbowl’s PTC Windchill Connector for Google Cloud Search provides customers with the ability to leverage Google’s industry-leading technology to search PTC Windchill for Documents, CAD files, Enterprise Parts, Promotion Requests, Change Requests, and Change Notices. The PTC Windchill Connector for Google Cloud Search assigns security to all items indexed through the connector based on the default ACL configuration specified in the connector configuration. The connector allows customers to take full advantage of additional search features provided by Google Cloud Search including Facets and Spelling Suggestions just as you would expect from a Google solution.

To read the rest of this blog post and see an architecture diagram showing how Fishbowl connects Google Cloud Search with PTC Windchill, please visit the PTC LiveWorx 2019 blog.

The post Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

[Solved] Oracle EBS (R12) Installation Issue: Could not find the main class: oracle.apps.ad.rapidwiz.RIWizard

Online Apps DBA - Thu, 2018-12-06 06:52

Troubled with an issue while running rapidwiz on an Exadata database machine to install Oracle EBS R12.2? If yes, visit https://k21academy.com/appsdba39 and consider our new blog covering: ✔What is rapidwiz ✔Issues, Causes and Solution for Installing Through Rapidwiz

The post [Solved] Oracle EBS (R12) Installation Issue: Could not find the main class: oracle.apps.ad.rapidwiz.RIWizard appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DockerCon18 Barcelona – Day 3

Yann Neuhaus - Thu, 2018-12-06 06:37

Wednesday was the last day of DockerCon18 Europe. Like the previous day, we started the journey with a two-hour keynote, this time more oriented towards the Docker community. The core message of the keynote: the community is one of the pillars of open source technologies, and Docker Inc wants to push the community aspect more and more. The community is growing very fast and is very competitive.

They took the opportunity to award the community leader of the year and to name a new Docker Captain, Bret Fisher.


Then we attended an interesting session: Docker Storage with Swarm and Kubernetes.

The presenter started the session with a funny part: Container Storage Fake News!! For a few minutes, he listed one by one the items of fake news related to storage in the container world. The best one for us:

RDBMS and databases cannot be run in containers: NO! Official images are available from the vendors. The best example is SQL Server, which comes with a very competitive Docker image for its users.

The core message of the session is that database containers are coming more and more and will be used and deployed. The very interesting part is the collaboration between Docker and the storage providers, who are developing APIs for Docker compatibility, so in the future each storage provider will have its own API to communicate with Docker containers.
 

The last but not least session of the day for me was about Provisioning and Managing Storage for Docker Containers.

The goal of the session was to explain how we can easily manage storage operations for containers.

The Docker EE platform with Kubernetes (with PVs and PVCs) will help us in the future to manage storage operations for containers.

PV = Persistent Volume
PVC = Persistent Volume Claim

They also presented the difference between static and dynamic provisioning in Kubernetes and the future of storage management in Docker using CSI.

Core message: Docker is making storage a priority.


The post DockerCon18 Barcelona – Day 3 appeared first on Blog dbi services.

DockerCon18 Barcelona – Day 2

Yann Neuhaus - Thu, 2018-12-06 04:59

Tuesday was the second day in Barcelona for DockerCon18. We attended the first general session in the morning. It was a mix of presentations, live demos and the participation of big Docker customers in EMEA, such as Société Générale in France, who presented the impact of Docker on their daily business. The main message of the first part of the keynote was: "How Docker can help you make the digital transformation of your business".

In the second part, new features were presented during the live demos:

  • docker stack deployment using Docker EE
  • docker-assemble command: builds a Docker image, without any configuration, starting from a git repository of the application source code.
  • docker stack command: deploys an application stack using a compose file.
  • docker-app command: a utility to help make Compose files more reusable and shareable.

Then they presented the introduction of Kubernetes support on the Docker EE platform.


Finally, they presented the way to deploy an application with the Docker Desktop Application Designer.

The keynote video is also available here, for those interested.

After the keynote, we attended a very interesting workshop about storage in the Docker EE 2.1 platform, given by Don Stewart, a Solution Architect at Docker.


In the lab, we discovered the types of storage options that are available and how to implement them within a container environment.
Lab link for those interested: https://github.com/donmstewart/docker-storage-workshop

The first session of the afternoon was about the Docker Enterprise platform 2.1: Architecture Overview and Use Cases.

 

The presentation was split into 3 main parts:

  • Docker Enterprise overview and architecture
  • Docker Enterprise 2.1 – What’s new with demos
  • Next steps

 

The first part of the presentation was more marketing-oriented, presenting the Docker Enterprise platform.

Then the following new features were presented including small demos:

  • Extended Windows Support
  • Extended Kubernetes Support: Windows Server 2016, 1709, 1803, 2019
  • Improve Operational Insights: node metrics, data retention overview, more metrics, and charts…
  • Image management and storage optimizations
  • Security improvements


We finished the conference day with a workshop again, yes… because during this conference the level and the quality of the workshops were very good and interesting. The workshop was about Swarm Orchestration – Features and Workflows.
This was definitely one of the best workshops I attended.

Slides: https://container.training/swarm-selfpaced.yml.html#1
Github repository: https://github.com/jpetazzo/container.training

During this workshop, we created a complete Docker cluster using Swarm and took a deep dive into Swarm orchestration.

A very interesting day, with a lot of new things around Docker.

The post DockerCon18 Barcelona – Day 2 appeared first on Blog dbi services.

Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations

Fishbowl has helped numerous customers migrate Solidworks data into PTC Windchill. We have proven processes and proprietary applications to migrate from SolidWorks Enterprise PDM (EPDM) and PDMWorks, as well as for WTPart migrations including structure and linking. This extensive experience, combined with our bulk loading software, has made us one of the world's premier PTC data migration specialists.

Over the years, we’ve created various resources for Windchill customers to help them understand their options to migrate Solidworks data into Windchill, as well as some best practices when doing so. After all, we’ve seen firsthand how moving CAD files manually wastes valuable engineering resources that can be better utilized on more important work.

We’ve categorized those resources below. Please explore them and learn how Fishbowl Solution can help you realize the automation gains you are looking for.

Blog Posts Infographic Webinar Brochures LinkLoader Web Page

The post Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Podcast: Inspiring Innovation and Entrepreneurism in Young People

OTN TechBlog - Wed, 2018-12-05 07:37

A common thread connecting the small army of IT professionals I've met over the last 20 years is that their interest in technology developed when they were very young, and that youthful interest grew into a full-fledged career. That's truly wonderful. But what happens if a young person never has a chance to develop that interest? And what can be done to draw those young people to careers in technology? In this Oracle Groundbreakers Podcast extra you will meet someone who is dedicated to solving that very problem.

Karla Readshaw is director of development for Iridescent, a non-profit organization focused on bringing quality STEM education (science, technology, engineering, and mathematics) to young people -- particularly girls -- around the globe.

"Our end goal is to ensure that every child, with a specific focus on underrepresented groups -- women and minorities -- has the opportunity to learn, and develop curiosity, creativity and perseverance, what real leaders are made of," Karla explains in her presentation.

Iridescent, through its Technovation program, provides middle- and high-school girls with the resources to develop solutions to real problems in their local communities, "leveraging technology and engineering for social good," as Karla explains.

Over a three-month period, the girls involved in the Technovation program identify a problem within their community, design and develop a mobile app to address the issue, and then build a business around that app, all under the guidance of an industry mentor.

The results are impressive. In one example, a team of hearing-impaired girls in Brazil developed an app that teaches American Sign Language, and then developed a business around it. In another example, a group of high-school girls in Guadalajara, Mexico drew on personal experience to develop an app that strengthens the relationship between Alzheimer's patients and their caregivers. And a group of San Francisco Bay area girls created a mobile app that will help those with autism to improve social skills and reduce anxiety.

Want to learn more about the Technovation program, and about how you can get involved? Just listen to this podcast. 

This program was recorded during Karla's presentation at the Women In Technology Breakfast held on October 22, 2018 as part of Oracle Code One.

Additional Resources Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • GraphQL and REST: An Objective Comparison: a panel of experts weighs the pros and cons of each of these approaches in working with APIs. 
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

OEM Cloud Control 13c – Agent Gold Image

Yann Neuhaus - Wed, 2018-12-05 06:52
Introduction

I am currently setting up a new “Base Image” virtual machine (Red Hat Enterprise Linux 7.6) which will be used to create 6 brand new Oracle database servers requested by a customer. Besides installing and configuring the OS, I also have to install 3 Oracle Homes and one Cloud Control Agent 13c.

An OMS13c server already exists including an Agent patched with the EM-AGENT Bundle Patch 13.2.0.0.181031 (28680866) :
oracle@oms13c:/home/oracle/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Tue Nov 13 17:32:48 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@oms13c:/home/oracle/ [agent13c]

However, when I wanted to deploy the CC13c Agent on my Master VM from the Cloud Control 13c web interface (Setup > Add Target > Add Targets Manually > Install Agent on Host), the Agent was successfully installed but… without the patch 28680866 :( . That means I will have to install the patch manually. Considering that the goal of creating a "Base Image" VM for this project is to quickly and easily deliver 6 database servers, having to install AND patch the Agent on each server is not very efficient and doesn't fit with what I want.
I therefore had to find a better way to deploy a patched Agent, and the solution was to use an Agent Gold Image. It allowed me to do exactly what I wanted.

In this post I will show how I have set this up.

Deploying the Agent

Here is how we can deploy the Agent on the Base Image VM. From Cloud Control 13c, we click on Setup > Add Target > Add Targets Manually > Install Agent on Host :

Then we insert the name of the target VM, we select the appropriate platform…

…and we specify the directory in which we want to install the Agent (Agent Home) :

Everything is now ready to start the deployment. We can click on Next to review the deployment configuration and on Deploy Agent to start.
Once the Agent is correctly deployed, the status should look like this :

As explained above we can see that the Agent is not patched with the Bundle Patch of October 2018 :
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c] ./opatch lsinventory | grep 28680866
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c]

We must patch it manually…

Updating OPatch

Before installing a patch it is highly recommended to update the OPatch utility first. All versions of the tool are available here. The current one on my VM is 13.8.0.0.0 :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] opatch version
OPatch Version: 13.8.0.0.0


OPatch succeeded.

We must use the following command to update OPatch :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] unzip -q p6880880_139000_Generic.zip
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] cd 6880880/
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar -silent oracle_home=$ORACLE_HOME
Launcher log file is /tmp/OraInstall2018-11-23_02-58-11PM/launcher2018-11-23_02-58-11PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz. Actual 2099.998 MHz Passed
Checking swap space: must be greater than 512 MB. Actual 4095 MB Passed
Checking if this platform requires a 64-bit JVM. Actual 64 Passed (64-bit not required)
Checking temp space: must be greater than 300 MB. Actual 27268 MB Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2018-11-23_02-58-11PM
Installation Summary
[...] [...] Logs successfully copied to /u01/app/oraInventory/logs.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c]

You probably noticed that since OEM 13cR2 the way to update OPatch has changed: no more simple unzip, we have to run a Java installer instead (I don't really understand why…).
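
If you need to repeat this OPatch update elsewhere (or script it as part of the Base Image build), the steps above can be wrapped in a few lines of shell. This is only a sketch based on the commands shown in this post; the agent ORACLE_HOME and the staging directory are the ones used here and must be adapted to your environment:

export ORACLE_HOME=/u01/app/oracle/agent13c/agent_13.2.0.0.0
cd /u01/app/oracle/software/OPatch/oms13cAgent
unzip -q p6880880_139000_Generic.zip
cd 6880880
# since 13cR2, OPatch is installed through a Java installer instead of a simple unzip
$ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar -silent oracle_home=$ORACLE_HOME
# quick check: should now report 13.9.3.3.0
$ORACLE_HOME/OPatch/opatch version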

Patching the Agent

As OPatch is now up to date, we can proceed with the installation of patch 28680866:
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] unzip -q p28680866_132000_Generic.zip
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] cd 28680866/28680866/
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Stopping agent ... stopped.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch apply
Oracle Interim Patch Installer version 13.9.3.3.0
Copyright (c) 2018, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/agent13c/agent_13.2.0.0.0
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/agent13c/agent_13.2.0.0.0/oraInst.loc
OPatch version : 13.9.3.3.0
OUI version : 13.9.1.0.0
Log file location : /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/agent13c"


Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 28680866


Do you want to proceed? [y|n] y
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '28680866' to OH '/u01/app/oracle/agent13c/agent_13.2.0.0.0'


Patching component oracle.sysman.top.agent, 13.2.0.0.0...
Patch 28680866 successfully applied.
Log file location: /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Let's restart the Agent and check that the patch has been applied:
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl start agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Starting agent ................... started.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Perfect. The Agent is now patched but…

Installing the DB plugin

…what about its plugins? We can see from the OMS13c server that the Agent doesn't have the database plugin installed:
oracle@oms13c:/home/oracle/ [oms13c] emcli login -username=sysman
Enter password :


Login successful
oracle@oms13c:/home/oracle/ [oms13c] emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name             Plugin-id            Version [revision]
Oracle Home              oracle.sysman.oh     13.2.0.0.0
Systems Infrastructure   oracle.sysman.si     13.2.2.0.0

This is normal. As no Oracle databases are currently running on the VM, the DB plugin was not installed automatically during the Agent deployment. We have to install it manually using the following command:
oracle@oms13c:/home/oracle/ [oms13c] emcli deploy_plugin_on_agent -agent_names="basevm.xx.yyyy.com:3872" -plugin=oracle.sysman.db
Agent side plug-in deployment is in progress
Use "emcli get_plugin_deployment_status -plugin=oracle.sysman.db" to track the plug-in deployment status.
oracle@oms13c:/home/oracle/ [oms13c]

To check the status of the plugin installation:
oracle@oms13c:/home/oracle/ [oms13c] emcli get_plugin_deployment_status -plugin=oracle.sysman.db
Plug-in Deployment/Undeployment Status

Destination : Management Agent - basevm.xx.yyyy.com:3872
Plug-in Name : Oracle Database
Version : 13.2.2.0.0
ID : oracle.sysman.db
Content : Plug-in
Action : Deployment
Status : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step                                     Start Time                End Time                  Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment                11/23/18 4:06:29 PM CET   11/23/18 4:06:30 PM CET   Success
Initialize                               11/23/18 4:06:32 PM CET   11/23/18 4:06:43 PM CET   Success
Validate Environment                     11/23/18 4:06:44 PM CET   11/23/18 4:06:44 PM CET   Success
Install software                         11/23/18 4:06:44 PM CET   11/23/18 4:06:45 PM CET   Success
Attach Oracle Home to Inventory          11/23/18 4:06:46 PM CET   11/23/18 4:07:04 PM CET   Success
Configure plug-in on Management Agent    11/23/18 4:07:05 PM CET   11/23/18 4:07:28 PM CET   Success
Update inventory                         11/23/18 4:07:23 PM CET   11/23/18 4:07:28 PM CET   Success
---------------------------------------- ------------------------- ------------------------- ----------
oracle@oms13c:/home/oracle/ [oms13c]

Quick check:
oracle@oms13c:/home/oracle/ emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name             Plugin-id            Version [revision]
Oracle Database          oracle.sysman.db     13.2.2.0.0
Oracle Home              oracle.sysman.oh     13.2.0.0.0
Systems Infrastructure   oracle.sysman.si     13.2.2.0.0


oracle@oms13c:/home/oracle/ [oms13c]

The Agent is now exactly in the state in which we want to deploy it on all 6 servers (OPatch up to date, Agent patched, DB plugin installed).
It’s now time to move forward with the creation of an Agent Gold Image.

Creating the Agent Gold image

Going back to Cloud Control, we can navigate to Setup > Manage Cloud Control > Gold Agent Images:
We click on Manage All Images…

…then on Create, and we give a name to our Image:

Once the Image is created, we must create its first version. We click on the Image name and then on Action > Create. From here we can select the Agent configured earlier on the VM; it will be the source of the Gold Image:

The creation of the Gold Agent Image and its first version can also be done from the command line with the following emcli command:
oracle@oms13c:/home/oracle/ [oms13c] emcli create_gold_agent_image -image_name="agent13c_gold_image" -version_name="gold_image_v1" -source_agent="basevm.xx.yyyy.com:3872"
A gold agent image create operation with name "GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042" has been submitted.
You can track the progress of this session using the command "emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042"


oracle@oms13c:/home/oracle/ [oms13c] emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042
Inputs
------
Gold Image Version Name : gold_image_v1
Gold Image Name : agent13c_gold_image
Source Agent : basevm.xx.yyyy.com:3872
Working Directory : %agentStateDir%/install


Status
-------
Step Name                 Status        Error Cause   Recommendation
Create Gold Agent Image   IN_PROGRESS


oracle@oms13c:/home/oracle/

The Gold Agent Image is now created. We can now deploy it on the other servers in the same way as the first deployment, but this time selecting With Gold Image:

Once the Agent is deployed on the server, we can see that OPatch is up to date:
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

The Agent Bundle Patch is installed:
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

And the DB plugin is ready:
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] ll
total 24
drwxr-xr-x. 31 oracle oinstall 4096 Dec 3 22:59 agent_13.2.0.0.0
-rw-r--r--. 1 oracle oinstall 209 Dec 3 22:32 agentimage.properties
drwxr-xr-x. 8 oracle oinstall 98 Dec 3 22:58 agent_inst
-rw-r--r--. 1 oracle oinstall 565 Dec 3 22:56 agentInstall.rsp
-rw-r--r--. 1 oracle oinstall 19 Dec 3 22:56 emctlcfg.rsp
-rw-r-----. 1 oracle oinstall 350 Dec 3 22:32 plugins.txt
-rw-r--r--. 1 oracle oinstall 470 Dec 3 22:57 plugins.txt.status
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] cat plugins.txt.status
oracle.sysman.oh|13.2.0.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.oh|13.2.0.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.xa|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.emas|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

Conclusion

Using a Gold Image drastically eases the management of OMS Agents in Oracle environments. In addition to allowing mass deployment on targets, it is also possible to manage several Gold Images with different patch levels. The hosts are simply subscribed to a specific Image and follow its life cycle (new patches, new plugins, and so on).
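
For reference, the subscription itself can also be scripted with emcli. The verb below is the gold-image subscription verb as I remember it from the EM13c documentation; treat the exact verb and parameter names as an assumption and confirm them with emcli help on your OMS. The agent name is just an example based on the target server used above:

# to be confirmed with: emcli help subscribe_agents
emcli subscribe_agents -image_name="agent13c_gold_image" -agents="srvora01.xx.yyyy.com:3872"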

Think about it during your next Oracle monitoring project!

The article OEM Cloud Control 13c – Agent Gold Image appeared first on Blog dbi services.

[Blog] Automatic Workload Repository (AWR): Database Statistics

Online Apps DBA - Wed, 2018-12-05 05:21

The Automatic Workload Repository (AWR) report collects, processes, and maintains performance statistics for problem detection and self-tuning purposes. Want to know more? Visit https://k21academy.com/tuning14 and consider our new blog on AWR, covering important topics like: ✔ The different features of AWR ✔ Snapshots & Baselines in AWR ✔ How to read AWR reports and […]

The post [Blog] Automatic Workload Repository (AWR): Database Statistics appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Adding Calculated Fields to Your Visual Builder UI

Shay Shmeltzer - Tue, 2018-12-04 17:16

This is a quick blog to show two techniques for adding calculated fields to an Oracle Visual Builder application.

Both techniques do the calculation on the client side (in the browser). Keep in mind that you might want to consider doing the calculation on the back end of your application and getting the calculated value delivered directly to your client - in some cases this results in better performance. But sometimes you don't have access to modify the back end, or you can't do calculations there, so here we go:

1. For simple calculations you can just use the value property of a field to do the calculation for you.

For example, if you need to show the yearly salary, you can take the value expression of a field and just append *12 to it.

You can also use this to calculate values from multiple fields, for example [[$current.data.firstName + " " + $current.data.lastName]] will get you a field with the full name.

2. For more complex calculations you might need to write some logic to arrive at your calculated value, for example if you have multiple if/then conditions. To do that you can create a client-side JavaScript function in your page's JS section. Then you refer to the function from your UI component's value attribute using something like {{$functions.myFunc($current.data.salary)}}.
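
To give an idea of what that looks like, here is a minimal sketch of such a page-level function. The function name, the bands, and the thresholds are invented for this example, and the module skeleton may differ slightly depending on your Visual Builder version; the component then simply binds its value to {{ $functions.salaryBand($current.data.salary) }} as described above:

define([], function () {
  'use strict';

  var PageModule = function PageModule() {};

  // Hypothetical example: derive a salary band from the monthly salary
  // using multiple if/then conditions.
  PageModule.prototype.salaryBand = function (salary) {
    if (salary === null || salary === undefined) {
      return 'Unknown';
    }
    if (salary * 12 >= 100000) {
      return 'High';
    } else if (salary * 12 >= 50000) {
      return 'Medium';
    }
    return 'Standard';
  };

  return PageModule;
});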

As you'll see in the demo, if you switch to the code view of your application, the code editor in Oracle VB will give you code insight into the functions you have defined for your page, helping you eliminate coding errors.

Categories: Development
