Feed aggregator

Automating DevSecOps for Java Apps with Oracle Developer Cloud

OTN TechBlog - Mon, 2019-04-22 11:32

Looking to improve your application's security? Automating vulnerability reporting helps you prevent attacks that leverage known security problems in code that you use. In this blog we'll show you how to achieve this with Oracle's Developer Cloud.

Most developers rely on third party libraries when developing applications. This helps them reduce the overall development timelines by providing working code for specific needs. But are you sure that the libraries you are using are secure? Are you keeping up to date with the latest reports about security vulnerabilities that were found in those libraries? What about apps that you developed a while back and are still running but might be using older versions of libraries that don't contain the latest security fixes?

DevSecOps aims to integrate security aspects into the DevOps cycle, ideally automating security checks as part of the dev-to-release lifecycle. The latest release of Oracle Developer Cloud Service - Oracle's cloud-based DevOps and Agile team platform - includes a new capability to integrate security checks into your DevOps pipelines.

Relying on the public National Vulnerability Database, the new dependency vulnerability analyzer scans the libraries used in your application against the database of known issues, and flags any security risks your app might have based on this data. The current version of DevCS supports this for any Maven-based Java project, leveraging the pom files as the source of truth for the list of libraries used in your code.

Vulnerability Analyzer Step

When running the check, you can specify your level of tolerance for issues - for example, defining that you are OK with low-risk issues, but not with medium- or high-risk vulnerabilities. When a check finds issues, you can fail the build pipeline, send notifications, and even log an issue in the issue tracking system provided for free with Developer Cloud.

Check out this demo video to see the process in action.

Having this type of vulnerability scan applied to your platform can save you from situations where hackers leverage publicly known issues and out-of-date libraries to break into your systems. These checks can be part of your regular build cycle, and can also be scheduled to run on a regular basis against systems that have already been deployed - to verify that they stay up to date with the latest security checks.

 

I gave up my cell phone & laptop for the weekend: This is what I learned

Look Smarter Than You Are - Mon, 2019-04-22 10:10
It was time for a technology detox. When I left work on Good Friday, I left my laptop at the office. I got home at 3PM and put my mobile phone on a charger that I wouldn't see until Monday at 9AM. And my life free of external, involuntary, technological distraction began... along with the stress of being out of touch for the next 3 days. Here's what I learned.

Biggest Lessons
  1. It's really stressful at first, but you get over it.
  2. All those people you told "if it's an emergency, contact my significant other" will not have any emergencies suitable for contacting your significant other.
  3. It will leave you wanting more.
I learned far more about myself and we'll get to that in a second.

Why in the name of God?
Thanks to the cruel "Screen Time" tracking feature of my Apple iPhone, I found that on an average day, I lift up my phone more than 30 times before 11AM, and it gets worse from there. In general, I am using my phone 6+ hours per day and many days are a lot worse. I pay more attention to my phone than the people around me: it's always within arm's reach and I use it for everything. As a CEO, my outward reason for my phone addiction is that I have to be connected: emails and text messages must be dealt with immediately and without my calendar, I might miss a Very Important Meeting. In reality, I am completely addicted to my cell phone and the whole "I have to stay connected" thing is largely rationalization.

But about a week ago, I looked around at the people in my life and realized that we're all addicted: for some of us, it's about communication. Others live in their games. Some people are on Instagram looking at puppies and kittens. Whatever your thing, you're getting it through either your phone or your laptop.

So why take a break? Mostly to find out 1) if I could make it for 42 hours; and 2) what I could learn from the experience. I settled on Easter weekend (April 19-22).

Things I thought I couldn't live without
Texting. According to the aforementioned Evil Screen Time, I knew that I spent 1.5 hours a day on text messaging. To be clear, I'm not a tween: my company uses text messaging more than any other communication vehicle, it's how I stay in contact with friends (who has time for phone calls?), and it's about the only way my kids will talk to me.

Email. While texting is great for short communications and quick back-and-forths, I get around 200 non-spam emails on the average day and about 50 on the average weekend. When you have something longer to say or it's not urgent, email is the way to go.

Navigation. I have long since forgotten how to drive without the little blue dot directing me. There are about four places I felt I could find on my own (work, home, airport, grocery store), but I was sure that I would be lost without Google Maps or Waze.

Games. I am level 40 on Pokemon Go (humble brag) and I have played it every day since July 2016. It's literally the only game on my phone, but I have to keep my daily streak going lest... I don't know, actually, but the stress of missing out on my 7-day rewards was seriously getting to me.

Turns out, I didn't miss Pokemon Go, I'm actually a decent driver without a phone (it's like falling off a bike: you never forget how), and if you're off email, you never know what you're missing. I did miss texting, but not in the way I thought I would. So what did I actually miss?

Things I actually missed
Bitmoji. I genuinely missed sending cute pictures around to my friends of me as the Easter Bunny and receiving their pictures dressed up inside Easter eggs. I kept wanting to sneak peeks at my wife's phone to see if she was getting anything cute, though I did manage to resist.

Information. I had forgotten the days when questions didn't have answers. What's the address of Academy Sports? I didn't know, so I just had to drive in the general area where I thought it was. What time does Salata open? No idea, so I drove there and got to wander outside for a bit until they opened for the day (fun fact: stores still post actual opening/closing hours on their front doors!). What time is the movie Little playing at the AMC Grapevine Mills 30? Who won the Texas Rangers game (when in doubt, assume it's the team they're playing against)? Who is the actor that plays that one character in that movie, oh, come on, you know who I'm talking about, that guy, let me just look it up for you, oh, damn, I can't until Monday, FML?

Calendar. I worried all weekend about my schedule for the upcoming week: when was my first appointment on Monday, what did I have scheduled for after work, was there anything I should be preparing for, when was I leaving town next, where was I supposed to be for Memorial Day weekend? It went on-and-on, and it turns out that none of it matters.

Photos. I didn't realize how many photos I take of the world around me, until I couldn't take any photos at all. I had to use a long-forgotten mental trick called "memory." It made me pay a lot more attention to the world around me, and I genuinely remember more of how I experienced the weekend than if I had been trying to catalog everything through pictures. I'm sure photos would have made this blog more appealing, but I'm doing all this from memory, so all we have are words.

Connection. I wanted to know what my friends and family were doing and to let them know I was thinking of them. Without technology, this is almost impossible nowadays. I had to resort to seeing them in-person: I met a couple of them at a restaurant and we got together with another friend for cycling, a movie, and Game of Thrones. But it turns out that those friends - the ones I spent time with in-person - I felt more deeply connected to than before the weekend started. Texting is about surface-level connecting, but facetime (note that this is different than FaceTime) is about bonding.

What changed over the weekend?
For one, I spent a lot more time outside. I played frisbee, went on a fourteen-mile bike ride, worked out at the gym, walked around some, went to the mall, saw a movie, and in general, I actually experienced more of the world than I normally do. I also didn't trip over a curb once, because unlike normal, I was looking up the whole time.

I read more instead of looking at my phone each night to fall asleep. I made it 100 pages into a book that I've been meaning to read for a year now. And in the morning I didn't reach for my phone on my bedside table either. I tend to forget how immersed you can get in a book when you don't have notifications popping up constantly telling you what you should be doing instead of reading in peace.

I spent a lot of time with my wife this weekend to the point that she was probably sick of me by Sunday night, but we spent real time with each other without any technological distractions. I finally gave her an Edward Break last night by heading off to take a long bath while reading more of my book (Stealing Snow, if you're curious). She fell asleep and I stayed up reading until midnight.

Any lasting effects?
I thought I would be longing for my phone and my laptop (particularly text and emails) at exactly 9AM this morning. I waited until 9AM and opened up my laptop to see what appointment I had at 9AM. It turns out no one needs me - or loves me? - until 10:30AM, so I opened up a browser window to write my first blog entry in many, many months. My cell phone is still face down, and as of 10AM, I still have no idea who texted or emailed me all weekend. I'm blissfully writing away, and I have to admit, I'm not looking forward to going back to my constantly-connected world.

Will giving up your technology addiction for a weekend give you some sort of mystical clarity, a purity of soul that lets you know how the Dalai Lama must feel when he's between text messages? No, but it will help you find out just how addicted you are, and how strong your willpower is. It'll help you understand what you're missing when you're disconnected, and if you're like me, you'll find that in some ways, you actually like it.

Now will I ever do this again? I'll let you know after I log into my email, read all my texts, and see just how bad the world got over the weekend. Until then, I'm blissfully unaware.
Categories: BI & Warehousing

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Michael Dinh - Sun, 2019-04-21 22:46

Finally, I have reached a point that I can live with for Grid 18c upgrade because the process runs to completion without any error and intervention.

Note that ACFS Volume is created in CRS DiskGroup which may not be ideal for production.

Rapid Home Provisioning Server is configured and is not running.

The outcome differs depending on whether the upgrade is performed via the GUI or silently, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.

While we both encountered the same error “Upgrading RHP Repository failed”, we accomplished the same results via different courses of action.

The unexplained and unanswered question is, “Why is the RHP Repository being upgraded?”

Ultimately, it is cluvfy that changes the cluster upgrade state, as shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer

I would suggest running the last step via the GUI, if feasible, rather than in silent mode, to see what is happening:

/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.

18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************

Note: MGMTDB is required when using Rapid Host Provisioning. 
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider installing a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA. 
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).

The following parameters from (Doc ID 2369422.1) were the root cause for all the issues in my test cases.

Because MGMTDB is not required, it made sense to set the following parameters, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false
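For illustration, combining these flags with the gridSetup.sh command shown later would look something like this (a reconstruction, not a verbatim command from my logs):

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false \
-J-Doracle.install.crs.enableRemoteGIMR=false \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp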

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified for each of the test cases.

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.chad
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.7.214 172.16
                                                             .9.10,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
----
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how this is convoluted.
In my case, it’s easy because there were only 2 actions performed.
Do you know which GridSetupAction was performed, based on the directory name alone?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method

It might be better to use the GUI if available, but be careful.

For OUI installations or execution of critical scripts it is recommended to use VNC to avoid problems in case the connection with the server is lost.

I was using X and the connection was lost during the upgrade. It was the kiss of death, with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.

Also, it was confirmed by them that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur and conclude: the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, all having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?

Oracle Ksplice introduces Known Exploit Detection functionality

Wim Coekaerts - Sat, 2019-04-20 12:04

The Oracle Ksplice team has added some really cool new functionality to Oracle Ksplice. Rather than rewriting and copying their blog post here, just go directly to the source:

It's unique, it's awesome, it's part of the Oracle Linux premier subscription, and it's included in Oracle Cloud instances at no extra cost for all customers using Oracle Linux.

https://blogs.oracle.com/linux/using-ksplice-to-detect-exploit-attempts

 

Understanding Nested Lists Dictionaries of JSON in Python and AWS CLI

Pakistan's First Oracle Blog - Sat, 2019-04-20 03:01

After lots of hair pulling and bouts of frustration, I was able to grasp this nested list and dictionary thingie in the JSON output of AWS CLI commands such as describe-db-instances and others. If you run describe-db-instances for RDS or describe-instances for EC2, you get a huge pile of JSON mumbo-jumbo with all those curly and square brackets studded with colons and commas. The output is heavily nested.


For example, if you do:

aws rds describe-db-instances

you get all the information but heavily nested within. Now if you only want to extract or iterate through, say, the VpcSecurityGroupId of all database instances, then you have to traverse all that nested information, which comprises dictionaries whose keys have arrays as values, and those arrays contain more dictionaries, and so on.
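Worth noting: for one-off lookups the AWS CLI itself can flatten this nesting with a JMESPath --query expression, so you may not need Python at all. For example:

aws rds describe-db-instances --query 'DBInstances[*].VpcSecurityGroups[*].VpcSecurityGroupId'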

After the above rant, let me try to ease the pain a bit by explaining this. For clarity, I have taken the following chunk out of the describe-db-instances output. Suppose the only thing you are interested in is the value of VpcSecurityGroupId from the following chunk:

mydb = rds.describe_db_instances(DBInstanceIdentifier=onedb)
# mydb now holds something shaped like this (trimmed to the relevant part):
mydb = {'DBInstances':
          [
            {'VpcSecurityGroups': [ {'VpcSecurityGroupId': 'sg-0ed48bab1d54e9554', 'Status': 'active'}]}
          ]
       }

The variable mydb is a dictionary with the key DBInstances. This key DBInstances has an array as its value. The first item of that array is another dictionary, and the first key of that dictionary is VpcSecurityGroups. The value of this key VpcSecurityGroups is yet another array. That array's first item is again a dictionary. This last dictionary has a key VpcSecurityGroupId, and we want the value of this key.

If your head has stopped spinning, then read on and stop cursing me as I am going to demystify it now.

If you want to print that value, just use the following expression:

mydb['DBInstances'][0]['VpcSecurityGroups'][0]['VpcSecurityGroupId']

So the secret is: if it's a dictionary, then use the key name, and if it's an array, then use the index, and keep going. That's all there is to it. Full code to print this using Python, boto3, etc. is as follows:

import boto3
import click

rds = boto3.client('rds', region_name='ap-southeast-2')
dbs = rds.describe_db_instances()  # all instances; unused below, but handy for looping (see the sketch after the code)

@click.group()
def cli():
    "Gets RDS data"
    pass

@cli.command('list-database')
@click.argument('onedb')
def list_database(onedb):
    "List info about one database"
    mydb = rds.describe_db_instances(DBInstanceIdentifier=onedb)
    # print(mydb)
    # Following line only prints value of VpcSecurityGroupId of RDS instance
    print(mydb['DBInstances'][0]['VpcSecurityGroups'][0]['VpcSecurityGroupId'])
    # Following line only prints value of OptionGroup of RDS instance
    print(mydb['DBInstances'][0]['OptionGroupMemberships'][0]['OptionGroupName'])
    # Following line only prints value of Parameter Group of RDS instance
    print(mydb['DBInstances'][0]['DBParameterGroups'][0]['DBParameterGroupName'])

if __name__ == '__main__':
    cli()
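
And if you want that value for every instance rather than just one, the same pattern extends to a loop; a minimal sketch using the dbs response fetched at the top of the script:

# Iterate over every RDS instance and print each attached security group id
for db in dbs['DBInstances']:
    for sg in db['VpcSecurityGroups']:
        print(db['DBInstanceIdentifier'], sg['VpcSecurityGroupId'])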


I hope that helps. If you know an easier way, please do us a favor and let us know in the comments. Thanks.

Categories: DBA Blogs

Economics and Innovations of Serverless

OTN TechBlog - Fri, 2019-04-19 13:08

The term serverless has been one of the biggest mindset changes since the term cloud, and learning how to “think serverless” should be part of every developer's cloud-native journey. This is why one of Oracle’s 10 Predictions for Developers in 2019 is “The Economics of Serverless Drives Innovation on Multiple Fronts”. Let’s unpack what we mean by economics and innovation while covering a few common misconceptions.

The Economics

Cost is only part of the story

I often hear “cost reduction” as a key driver of serverless architectures. Everyone wants to save money and be a hero for their organization. Why pay for a full time server when you can pay per function millisecond? The ultimate panacea of utility computing — pay for exactly what you need and no more. This is only part of the story.

Economics is a broad term for the production, distribution, and consumption of things. Serverless is about producing software. And software is about using computers as leverage to produce non-linear value. Facebook (really MySpace) leveraged software to change the way the world connected. Uber leveraged software to transform the transportation industry. Netflix leveraged software to change the way the world consumed movies. Software is transforming every major company in every major industry, and for most, is now at the heart of how they deliver value to end users. So why the fuss about serverless?

Serverless is About Driving Non-Linear Value

Because serverless is ultimately about driving non-linear business value, which can fundamentally change the economics of your business. I’ve talked about this many times, but Ben nails it — “serverless is a ladder. You’re climbing to some nirvana where you get to deliver pure business value with no overhead.”

Pundits point out that “focus on business value” has been said many times over the years, and they’re right. But every software architecture cycle learns from past cycles and incorporates new ways to achieve this goal of greater focus, which is why serverless is such an important cycle to watch. It effectively incorporates the promise (and best) of cloud with the promise (and learnings) of SOA.

Ultimately the winning businesses reduce overhead while increasing value to their customers by empowering their developers. That’s why the economics are too compelling to ignore. Not because your CRON job server goes from $30 to $0.30/month (although a nice use case), but because creating a culture of innovation and focus on driving business value is a formula for success.

So we can’t ignore the economics. Let’s move to the innovations.

The Innovations

The tech industry is in constant motion. Apps, infrastructure, and the delivery process drive each other forward together in a ping-pong fashion. Here are a few of the key areas to watch that are contributing to forward movement in the innovation cycle, as illustrated in the “Digital Trialectic”:

Depth of Services

The web is fundamentally changing how we deliver services. We’re moving towards an “everything-as-a-service” world where important bits of functionality can be consumed by simply calling an API. Programming is changing, and this is driven largely by the depth of available services that solve problems which once cost developers hours of work.

Twilio now removes the need for SMS, voice, and now email (having acquired SendGrid) code and infrastructure. Google’s Cloud Vision API removes the need for complex object and facial detection code and infrastructure. AWS’s Ground Station removes the need for satellite communications code and infrastructure (finally?), and Oracle’s Autonomous Database replaces your existing Oracle Database code and infrastructure.

Pizzas, weather, maps, automobile data, cats – you have an endless list of things accessible across simple API calls.

Open Source

As always, serverless innovation is happening in the world of open source as well, many of which end up as part of the list of services above. The Fn Project is fully open source code my team is working on which will allow anyone to run their own serverless infrastructure on any cloud, starting with Functions-as-a-service and moving towards things like workflow as well. Come say hi in our Slack.

But you can get to serverless faster with the managed Fn service, Oracle Functions. And there are other great industry efforts as well, including Knative by Google, OpenFaaS by Alex Ellis, and OpenWhisk by IBM.

All of these projects focus mostly on the compute aspect of a serverless architecture. There are many projects that aim to make other areas easier such as storage, networking, security, etc, and all will eventually have their own managed service counterparts to complete the picture. The options are a bit bewildering, which is where standards can help.

Standards

With a paradox of choice emerging in serverless, standards aim to ease the pain in providing common interfaces across projects, vendors, and services. The most active forum driving these standards is the Serverless Working Group, a subgroup of the Cloud Native Compute Foundation. Like cats and dogs living together, representatives from almost every major vendor and many notable startups and end users have been discussing how to “harmonize” the quickly-moving serverless space. CloudEvents has been the first major output from the group, and it’s a great one to watch. Join the group during the weekly meetings, or face-to-face at any of the upcoming KubeCons.

Expect workflow, function signatures, and other important aspects of serverless to come next. My hope is that the group can move quickly enough to keep up with the quickly-moving space and have a material impact on the future of serverless architectures, further increasing the focus on business value for developers at companies of all sizes.

A Final Word

We’re all guilty of skipping to the end in long posts. So here’s the net net: serverless is the next cycle of software architecture, its roots and learnings coming from best-of SOA and cloud. Its aim is to change the way in which software is produced by allowing developers to focus on business value, which in turn drives non-linear business value. The industry is moving quickly with innovation happening through the proliferation of services, open source, and ultimately standards to help harmonize this all together.

Like anything, the best way to get started is to just start. Pick your favorite cloud, and start using functions. You can either install Fn manually or sign up for early access to Oracle Functions.

If you don’t have an Oracle Cloud account, take a free trial today.

Oracle VM Server: Working with ovm cli

Dietrich Schroff - Fri, 2019-04-19 06:01
After getting the ovmcli running, here are some commands that are quite helpful when you are working with Oracle VM Server.
But first:
Starting the ovmcli is done via
ssh admin@localhost -p 10000
at the OVM Manager.

After that you can get some overviews:
OVM> list server
Command: list server
Status: Success
Time: 2019-01-25 06:56:55,065 EST
Data:
  id:18:e2:a6:9d:5c:b6:48:3a:9b:d2:b0:0f:56:7e:ab:e9  name:oraclevm
OVM> list vm
Command: list vm
Status: Success
Time: 2019-01-25 06:56:57,357 EST
Data:
  id:0004fb0000060000fa3b1b883e717582  name:myAlpineLinux
OVM> list ServerPool
Command: list ServerPool
Status: Success
Time: 2019-01-25 06:57:12,165 EST
Data:
  id:0004fb0000020000fca85278d951ce27  name:MyServerPool
A complete list of all list commands can be obtained like this:
OVM> list ?
          AccessGroup
          AntiAffinityGroup
          Assembly
          AssemblyVirtualDisk
          AssemblyVm
          BondPort
          ControlDomain
          Cpu
          CpuCompatibilityGroup
          FileServer
          FileServerPlugin
          FileSystem
          Job
          Manager
          Network
          PeriodicTask
          PhysicalDisk
          Port
          Repository
          RepositoryExport
          Server
          ServerController
          ServerPool
          ServerPoolNetworkPolicy
          ServerUpdateGroup
          ServerUpdateRepository
          StorageArray
          StorageArrayPlugin
          StorageInitiator
          Tag
          VirtualAppliance
          VirtualApplianceVirtualDisk
          VirtualApplianceVm
          VirtualCdrom
          VirtualDisk
          VlanInterface
          Vm
          VmCloneCustomizer
          VmCloneNetworkMapping
          VmCloneStorageMapping
          VmDiskMapping
          Vnic
          VolumeGroup
An overview of which kinds of commands can be used, like list:
OVM> help
For Most Object Types:
    create <object type> [(attribute1)="value1"] ... [on <object type>]
    delete <object type>
    edit   <object type> (attribute1)="value1" ...
    list <object type>
    show <object type>
For Most Object Types with Children:
    add <object type> to <object type>
    remove <object type> from <object type>
Client Session Commands:
    set alphabetizeAttributes=[Yes|No]
    set commandMode=[Asynchronous|Synchronous]
    set commandTimeout=[1-43200]
    set endLineChars=[CRLF,CR,LF]
    set outputMode=[Verbose,XML,Sparse]
    showclisession
Other Commands:
    exit
    showallcustomcmds
    showcustomcmds
    showobjtypes
    showversion
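For example, show displays the attributes of a single object and takes the same id= syntax as the commands below; a sketch reusing the VM id from above (output omitted):
OVM> show Vm id=0004fb0000060000fa3b1b883e717582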
If you want to get your vm.cfg file content, you can use the id from "list vm" and type:
OVM> getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Command: getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Status: Success
Time: 2019-01-25 06:59:46,875 EST
Data:
  OVM_domain_type = xen_pvm
  bootargs =
  disk = [file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/ISOs/0004fb0000150000226a713414eaa501.iso,xvda:cdrom,r,file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualDisks/0004fb0000120000f62a7bba83063840.img,xvdb,w]
  bootloader = /usr/bin/pygrub
  vcpus = 1
  memory = 512
  on_poweroff = destroy
  OVM_os_type = Other Linux
  on_crash = restart
  cpu_weight = 27500
  OVM_description =
  cpu_cap = 0
  on_reboot = restart
  OVM_simple_name = myAlpineLinux
  name = 0004fb0000060000fa3b1b883e717582
  maxvcpus = 1
  vfb = [type=vnc,vncunused=1,vnclisten=127.0.0.1,keymap=en-us]
  uuid = 0004fb00-0006-0000-fa3b-1b883e717582
  guest_os_type = linux
  OVM_cpu_compat_group =
  OVM_high_availability = false
  vif = []
The Oracle documentation (here) is very helpful.


Creating A Microservice With Micronaut, GORM And Oracle ATP

OTN TechBlog - Thu, 2019-04-18 12:56

Over the past year, the Micronaut framework has become extremely popular. And for good reason, too. It's a pretty revolutionary framework for the JVM world that uses compile-time dependency injection and AOP without any reflection. That means huge gains in startup time, runtime performance, and memory consumption. But it's not enough to just be performant; a framework has to be easy to use and well documented. The good news is, Micronaut is both. And it's fun to use and works great with Groovy, Kotlin and GraalVM. In addition, the people behind Micronaut understand the direction that the industry is heading and have built the framework with that direction in mind. This means that things like serverless and cloud deployments are easy, and there are features that provide direct support for them.  

In this post we'll look at how to create a microservice with Micronaut that exposes a "Person" API. The service will utilize GORM, which is a "data access toolkit" - a fancy way of saying it's a really easy way to work with databases (from traditional RDBMS to MongoDB, Neo4J and more). Specifically, we'll utilize GORM for Hibernate to interact with an Oracle Autonomous Transaction Processing DB. Here's what we'll be doing:

  1. Create the Micronaut application with Groovy support
  2. Configure the application to use GORM connected to an ATP database.
  3. Create a Person model
  4. Create a Person service to perform CRUD operations on the Person model
  5. Create a controller to interact with the Person service

First things first, make sure you have an Oracle ATP instance up and running. Luckily, that's really easy to do and this post by my boss Gerald Venzl will show you how to set up an ATP instance in less than 5 minutes. Once you have a running instance, grab a copy of your Client Credentials "Wallet" and unzip it somewhere on your local system.

Before we move on to the next step, create a new schema in your ATP instance and create a single table using the following DDL:
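A minimal sketch of such a table, with column names and types assumed from the Person model described later (GORM also expects the implicit ID and VERSION columns; how id gets populated depends on the identifier generator you map, a sequence for example):

CREATE TABLE person (
    id         NUMBER(19) PRIMARY KEY,
    version    NUMBER(19) NOT NULL,
    first_name VARCHAR2(50) NOT NULL,
    last_name  VARCHAR2(50) NOT NULL,
    is_cool    NUMBER(1) NOT NULL
);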

You're now ready to move on to the next step, creating the Micronaut application.

Create The Micronaut Application

If you've never used it before, you'll need to install Micronaut, which includes a helpful CLI for scaffolding certain elements (the application itself, controllers, etc.) as you work with your application. Once you've confirmed the install, run the following command to generate your basic application:
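With the Micronaut 1.x CLI that's roughly the following (the application name is a placeholder; `--lang groovy` gives us Groovy support):

mn create-app my-person-app --lang groovy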

Take a look inside that directory to see what the CLI has generated for you. 

As you can see, the CLI has generated a Gradle build script, a Dockerfile and some other config files as well as a `src` directory. That directory looks like this:

At this point you can import the application into your favorite IDE, so do that now. The next step is to generate a controller:
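Again via the CLI, something like:

mn create-controller Person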

We'll make one small adjustment to the generated controller, so open it up and add the `@CompileStatic` annotation to the controller. It should look like so once you're done:
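A sketch of the adjusted controller (package omitted; the `index()` stub returning a bare 200 OK is what the CLI generates):

import groovy.transform.CompileStatic
import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get

@CompileStatic
@Controller("/person")
class PersonController {

    @Get("/")
    HttpStatus index() {
        return HttpStatus.OK
    }
}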

Now run the application using `gradle run` (we can also use the Gradle wrapper with `./gradlew run`) and our application will start up and be available via the browser or a simple curl command to confirm that it's working.  You'll see the following in your console once the app is ready to go:

Give it a shot:

We aren't returning any content, but we can see the '200 OK' which means the application received the request and returned the appropriate response.

To make things easier for development and testing the app locally I like to create a custom Run/Debug configuration in my IDE (IntelliJ IDEA) and point it at a custom Gradle task. We'll need to pass in some System properties eventually, and this enables us to do that when launching from the IDE. Create a new task in `build.gradle` named `myTask` that looks like so:
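A sketch of such a task, assuming the default Application class the CLI generated (adjust the main class to your package):

// Behaves like the regular 'run' task but forwards -D properties
// from the IDE Run/Debug configuration to the application
task myTask(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath
    main = "my.person.app.Application"
    systemProperties System.properties
}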

Now create a custom Run/Debug configuration that points at this task and add the VM options that we'll need later on for the Oracle DB connection:

Here are the properties we'll need to populate for easier copy/pasting:
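The property names below are my own convention (they pair with the application.yml placeholders shown later); with the 18.3 JDBC driver, TNS_ADMIN can be passed as a URL parameter pointing at the unzipped wallet directory:

-Ddatasource.url=jdbc:oracle:thin:@yourdbname_high?TNS_ADMIN=/path/to/wallet
-Ddatasource.username=YOUR_SCHEMA
-Ddatasource.password=YOUR_PASSWORD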

Let's move to the next step and get the application ready to talk to ATP!

Configure The Application For GORM and ATP

Before we can configure the application we need to make sure we have the Oracle JDBC drivers available. Download them, create a directory called `libs` in the root of your application and place them there.  Make sure that you have the following JARs in the `libs` directory:
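For a wallet-based ATP connection that usually means this set (my assumption of the intended list):

ojdbc8.jar
ucp.jar
oraclepki.jar
osdt_core.jar
osdt_cert.jar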

Modify your `dependencies` block in your `build.gradle` file so that the Oracle JDBC JARs and the `micronaut-hibernate-gorm` artifacts are included as dependencies:
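A sketch of the relevant additions (everything else the CLI generated stays as-is):

dependencies {
    // Oracle JDBC and PKI jars dropped into libs/ above
    compile fileTree(dir: 'libs', include: ['*.jar'])
    // GORM for Hibernate support
    compile "io.micronaut.configuration:micronaut-hibernate-gorm"
    // ...plus the dependencies the CLI already generated
}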

Now let's modify the file located at `src/main/resources/application.yml` to configure the datasource and Hibernate.  
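A sketch of the GORM-style configuration, assuming the system property names introduced earlier (the placeholders resolve from -D properties at startup):

dataSource:
    pooled: true
    url: ${datasource.url}
    driverClassName: oracle.jdbc.OracleDriver
    username: ${datasource.username}
    password: ${datasource.password}
    dbCreate: none    # the person table already exists, so no schema generation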

Our app is now ready to talk to ATP via GORM, so it's time to create a service, model and some controller methods! We'll start with the model.

Creating A Model

GORM models are super easy to work with.  They're just POGOs (Plain Old Groovy Objects) with some special annotations that help identify them as model entities and provide validation via the Bean Validation API. Let's create our `Person` model object by adding a Groovy class called 'Person.groovy' in a new directory called `model`.  Populate the model as such:
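A sketch of the model as described (the firstName size rule matches the 5-to-50-character constraint mentioned later; the lastName rule is my assumption):

import grails.gorm.annotation.Entity
import javax.validation.constraints.NotNull
import javax.validation.constraints.Size

@Entity
class Person {

    @Size(min = 5, max = 50)
    String firstName

    @Size(min = 2, max = 50)
    String lastName

    @NotNull
    Boolean isCool
}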

Take note of a few items here. We've annotated the class with @Entity (`grails.gorm.annotation.Entity`) so GORM knows that this is an entity it needs to manage. Our model has 3 properties: firstName, lastName and isCool. If you look back at the DDL we used to create the `person` table above you'll notice that we have two additional columns that aren't addressed in the model: ID and version. The ID column is implicit with a GORM entity and the version column is auto-managed by GORM to handle optimistic locking on entities. You'll also notice a few annotations on the properties which are used for data validation as we'll see later on.

We can start the application up again at this point and we'll see that GORM has identified our entity and Micronaut has configured the application for Hibernate:

Let's move on to creating a service.

Creating A Service

I'm not going to lie to you. If you're waiting for things to get difficult here, you're going to be disappointed. Creating the service that we're going to use to manage `Person` CRUD operations is really easy to do. Create a Groovy class called `PersonService` in a new directory called `service` and populate it with the following:
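A sketch of the service as a GORM Data Service; GORM implements each abstract method from its signature (the exact signatures here are assumptions consistent with how the controller uses them below):

import grails.gorm.services.Service

@Service(Person)
abstract class PersonService {

    abstract Person save(String firstName, String lastName, Boolean isCool)

    abstract List<Person> findAll()

    // the Map variant accepts pagination args such as max and offset
    abstract List<Person> findAll(Map args)

    abstract Person get(Serializable id)

    abstract void delete(Serializable id)
}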

That's literally all it takes. This service is now ready to handle operations from our controller. GORM is smart enough to take the method signatures that we've provided here and implement the methods. The nice thing about using an abstract class approach (as opposed to using the interface approach) is that we can manually implement the methods ourselves if we have additional business logic that requires us to do so.

There's no need to restart the application here, as we've made no changes that would be visible at this point. We're going to need to modify our controller for that, so let's create one!

Creating A Controller

Let's modify the `PersonController` that we created earlier to give us some endpoints that we can use to do some persistence operations. First, we'll need to inject our PersonService into the controller.  This too is straightforward: simply include the following just inside our class declaration:
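That is, something like the following (field injection via javax.inject; constructor injection would work just as well):

// requires: import javax.inject.Inject
@Inject
PersonService personService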

The first step in our controller should be a method to save a `Person`.  Let's add a method annotated with `@Post` to handle this and within the method we'll call the `PersonService.save()` method.  If things go well, we'll return the newly created `Person`, if not we'll return a list of validation errors. Note that Micronaut will bind the body of the HTTP request to the `person` argument of the controller method meaning that inside the method we'll have a fully populated `Person` bean to work with.
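A sketch of such a method, assuming imports for HttpResponse, @Post, and @Body from io.micronaut.http; validate() and errors come from the GORM entity traits:

@Post("/save")
HttpResponse save(@Body Person person) {
    if (!person.validate()) {
        // 422 with the list of validation errors
        return HttpResponse.unprocessableEntity().body(person.errors.allErrors)
    }
    Person saved = personService.save(person.firstName, person.lastName, person.isCool)
    return HttpResponse.ok(saved)
}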

If we start up the application we are now able to persist a `Person` via the `/person/save` endpoint:

Note that we've received a 200 OK response here with an object containing our `Person`.  However, if we tried the operation with some invalid data, we'd receive some errors back:

Since our model (very strangely) indicated that the `Person` firstName must be between 5 and 50 characters we receive a 422 Unprocessable Entity response that contains an array of validation errors back with this response.

Now we'll add a `/list` endpoint that users can hit to list all of the Person objects stored in the ATP instance. We'll set it up with two optional parameters that can be used for pagination.
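A sketch, assuming javax.annotation.Nullable so that both query parameters are optional, with defaults applied in the method:

@Get("/list")
List<Person> list(@Nullable Integer max, @Nullable Integer offset) {
    // the named args become the Map consumed by findAll(Map)
    return personService.findAll(max: max ?: 10, offset: offset ?: 0)
}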

Remember that our `PersonService` had two signatures for the `findAll` method - one that accepted no parameters and another that accepted a `Map`.  The Map signature can be used to pass additional parameters like those used for pagination.  So calling `/person/list` without any parameters will give us all `Person` objects:

Or we can get a subset via the pagination params.
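For example, using the parameter names from the sketch above:

```
$ curl "http://localhost:8080/person/list?offset=0&max=5"
```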

We can also add a `/person/get` endpoint to get a `Person` by ID.
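A sketch (the URI template with a path variable is an assumption):

```groovy
import io.micronaut.http.annotation.Get

@Get("/get/{id}")
Person get(Long id) {
    personService.get(id)
}
```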

And a `/person/delete` endpoint to delete a `Person`.
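Again a sketch, mirroring the get endpoint (the 204 status choice is an assumption):

```groovy
import io.micronaut.http.HttpResponse
import io.micronaut.http.annotation.Delete

@Delete("/delete/{id}")
HttpResponse delete(Long id) {
    personService.delete(id)
    return HttpResponse.noContent()
}
```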

Summary

We've seen here that Micronaut is a simple but powerful way to create performant microservice applications and that data persistence via Hibernate/GORM is easy to accomplish when using an Oracle ATP backend. Your feedback is very important to me, so please feel free to comment below or interact with me on Twitter (@recursivecodes).

If you'd like to take a look at this entire application, you can view it or clone it via GitHub.

Oracle ACEs at APEX Connect 2019, May 7-9 in Bonn

OTN TechBlog - Thu, 2019-04-18 11:36

APEX Connect 2019, the annual conference organized by DOAG (the German Oracle User Group), will be held May 7-9, 2019 in Bonn, Germany. The event features a wide selection of sessions and events covering APEX, SQL and PL/SQL, and JavaScript. Among the session speakers are the following members of the Oracle ACE Program:

Oracle ACE Director Niels de Bruijn
Business Unit Manager APEX, MT AG
Cologne, Germany

Oracle ACE Director Roel Hartman
Director/Senior APEX Developer, APEX Consulting
Apeldoorn, Netherlands

Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy
Finland

Oracle ACE Director John Edward Scott
Founder, APEX Evangelists
West Yorkshire, United Kingdom

Oracle ACE Director Kamil Stawiarski
Owner/Partner, ORA-600
Warsaw, Poland

Oracle ACE Director Martin Widlake
Database Architect and Performance Specialist, ORA600
Essex, United Kingdom

Oracle ACE Alan Arentsen
Senior Oracle Developer, Arentsen Database Consultancy
Breda, Netherlands

Oracle ACE Tobias Arnhold
Freelance APEX Developer, Tobias Arnhold IT Consulting
Germany

Oracle ACE Dietmar Aust
Owner, OPAL UG
Cologne, Germany

Oracle ACE Kai Donato
Senior Consultant for Oracle APEX Development, MT AG
Cologne, Germany

Oracle ACE Daniel Hochleitner
Freelance Oracle APEX Developer and Consultant
Regensburg, Germany

Oracle ACE Oliver Lemm
Business Unit Manager, MT AG
Cologne, Germany

Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands

Oracle ACE Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany

Oracle ACE Matt Mulvaney
Senior Development Consultant, Explorer UK LTD
Leeds, United Kingdom

Oracle ACE Christian Rokitta
Managing Partner, iAdvise
Breda, Netherlands

Oracle ACE Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zürich, Switzerland

Oracle ACE Sven-Uwe Weller
Syntegris Information Solutions GmbH
Germany

Oracle ACE Associate Carolin Hagemann
Hagemann IT Consulting
Hamburg, Germany

Oracle ACE Associate Moritz Klein
Senior APEX Consultant, MT AG
Frankfurt, Germany


Migrating Oracle Database & Non Oracle Database to Oracle Cloud

You can migrate various source databases to different target cloud deployments running on Oracle Cloud. Oracle's automated migration tools will move an on-premises database to the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

CubeViewer - Process to Build the Cube Viewer

Anthony Shorten - Wed, 2019-04-17 18:32

As pointed out in the last post, the Cube Viewer is a new way of displaying data for advanced analysis. The Cube Viewer functionality extends the existing ConfigTools (a.k.a. Task Optimization) objects to allow an analysis to be defined as a Cube Type and a Cube View. Those definitions are used by the widget to display correctly and to define what level of interactivity the user can enjoy.

Note: Cube Viewer is available in Oracle Utilities Application Framework V4.3.0.6.0 and above.

The process of building a cube introduces new concepts and new objects to ConfigTools to allow for an efficient method of defining the analysis and interactivity. In summary form the process is described by the figure below:

Cube View Process

  • Design Your Cube. Decide the data and related information to be used in the Cube Viewer for analysis. This is not just a typical list of values but a design of dimensions, filters and values. This is an important step, as it helps determine whether the Cube Viewer is appropriate for the data to be analyzed.
  • Design Cube SQL. Translate the design into cube-based SQL. This SQL statement is formatted specifically for use in a cube.
  • Setup Query Zone. The SQL statement designed in the last step needs to be defined in a ConfigTools Query Zone for use in the Cube Type later in the process. This also allows additional information not contained in the SQL to be configured and added to the cube.
  • Setup Business Service. The Cube Viewer requires a Business Service based upon the standard FWLZDEXP application service. This is also used by the Cube Type later in the process.
  • Setup Cube Type. Define a Cube Type object specifying the Query Zone, Business Service and other settings to be used by the Cube Viewer at runtime. This brings all the configuration together into a new ConfigTools object.
  • Setup Cube View. Define an instance of the Cube Type with the relevant predefined settings for use in the user interface as a Cube View object. Appropriate users can use this as the initial view into the cube and as a basis for any Saved Views they want to implement.

Over the next few weeks, a number of articles will be available to outline each of these steps to help you understand the feature and be on your way to building your own cubes.

Oracle Database 19c download

Dietrich Schroff - Wed, 2019-04-17 15:20
In January 2019 Oracle released the documentation for Oracle Database 19c.

More than 7 weeks later there is still nothing at https://www.oracle.com/downloads/.

For 18c, the gap between the release of the documentation and the on-premises software was not this long...

Will the 19c on-premises software be released before May? Or later, in summer?

ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 6843, maximum: 2000)

Tom Kyte - Wed, 2019-04-17 13:26
Hi, I am using the query below to read XML from a BLOB and find a string, but I am facing error ORA-22835: buffer too small for BLOB to RAW conversion (actual: 15569, maximum: 2000). Please help me out with the example below: <code> SELECT XMLTYPE (UTL_RAW.cast_to_varchar...
Categories: DBA Blogs

Merge two rows into one row

Tom Kyte - Wed, 2019-04-17 13:26
Hi Tom, I seek your help on how to compare two rows in a table and, if they are the same, merge them. <code>create table test(id number, start_date date, end_date date, col1 varchar2(10), col2 varchar2(10), col3 varchar2(10)); insert into t...
Categories: DBA Blogs

Example of coe_xfr_sql_profile force_match TRUE

Bobby Durrett's DBA Blog - Wed, 2019-04-17 10:57

Monday, I used the coe_xfr_sql_profile.sql script from Oracle Support’s SQLT scripts to resolve a performance issue. I had to set the parameter force_match to TRUE so that the SQL Profile I created would apply to all SQL statements with the same FORCE_MATCHING_SIGNATURE value.

I had just gone off the on-call rotation at 8 am on Monday, and around 4 pm that day a coworker came up to me with a performance problem. A PeopleSoft Financials job was running longer than it normally did. Since it had run for several hours, I got an AWR report of the last hour, looked at the SQL ordered by Elapsed Time section, and found a number of similar INSERT statements with different SQL_IDs.

The inserts were the same except for certain constant values. So, I used my fmsstat2.sql script with ss.sql_id = '60dp9r760ja88' to get the FORCE_MATCHING_SIGNATURE value for these inserts. Here is the output:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 60dp9r760ja88         3334601 15-APR-19 05.00.34.061 PM                1         224414.511     224412.713         2.982                  0                      0                   .376             5785269                 40                   3707

Now that I had the FORCE_MATCHING_SIGNATURE value 5442820596869317879, I reran fmsstat2.sql with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 instead of ss.sql_id = '60dp9r760ja88' and got all of the insert statements and their PLAN_HASH_VALUE values. I needed these to use coe_xfr_sql_profile.sql to generate a script that creates a SQL Profile to force a better plan onto the insert statements. Here is the beginning of the output of the fmsstat2.sql script:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 0yzz90wgcybuk      1314604389 14-APR-19 01.00.44.945 PM                1            558.798        558.258             0                  0                      0                      0               23571                  0                    812
     5442820596869317879 5a86b68g7714k      1314604389 14-APR-19 01.00.44.945 PM                1            571.158        571.158             0                  0                      0                      0               23245                  0                    681
     5442820596869317879 9u1a335s936z9      1314604389 14-APR-19 01.00.44.945 PM                1            536.886        536.886             0                  0                      0                      0               21851                  0                      2
     5442820596869317879 a922w6t6nt6ry      1314604389 14-APR-19 01.00.44.945 PM                1            607.943        607.943             0                  0                      0                      0               25948                  0                   1914
     5442820596869317879 d5cca46bzhdk3      1314604389 14-APR-19 01.00.44.945 PM                1            606.268         598.11             0                  0                      0                      0               25848                  0                   1763
     5442820596869317879 gwv75p0fyf9ys      1314604389 14-APR-19 01.00.44.945 PM                1            598.806        598.393             0                  0                      0                      0               24981                  0                   1525
     5442820596869317879 0u2rzwd08859s         3334601 15-APR-19 09.00.53.913 AM                1          18534.037      18531.635             0                  0                      0                      0              713757                  0                     59
     5442820596869317879 1spgv2h2sb8n5         3334601 15-APR-19 09.00.53.913 AM                1          30627.533      30627.533          .546                  0                      0                      0             1022484                 27                    487
     5442820596869317879 252dsf173mvc4         3334601 15-APR-19 09.00.53.913 AM                1          47872.361      47869.859          .085                  0                      0                      0             1457614                  2                    476
     5442820596869317879 25bw3269yx938         3334601 15-APR-19 09.00.53.913 AM                1         107915.183     107912.459         1.114                  0                      0                      0             2996363                 26                   2442
     5442820596869317879 2ktg1dvz8rndw         3334601 15-APR-19 09.00.53.913 AM                1          62178.512      62178.512          .077                  0                      0                      0             1789536                  3                   1111
     5442820596869317879 4500kk2dtkadn         3334601 15-APR-19 09.00.53.913 AM                1         106586.665     106586.665         7.624                  0                      0                      0             2894719                 20                   1660
     5442820596869317879 4jmj30ym5rrum         3334601 15-APR-19 09.00.53.913 AM                1          17638.067      17638.067             0                  0                      0                      0              699273                  0                    102
     5442820596869317879 657tp4jd07qn2         3334601 15-APR-19 09.00.53.913 AM                1          118948.54      118890.57             0                  0                      0                      0             3257090                  0                   2515
     5442820596869317879 6gpwwnbmch1nq         3334601 15-APR-19 09.00.53.913 AM                0          48685.816      48685.816          .487                  0                      0                  1.111             1433923                 12                      0
     5442820596869317879 6k1q5byga902a         3334601 15-APR-19 09.00.53.913 AM                1            2144.59        2144.59             0                  0                      0                      0              307369                  0                      2

The first few lines show the good plan that these inserts ran on earlier runs. The good plan has PLAN_HASH_VALUE 1314604389 and runs in about 600 milliseconds. The bad plan has PLAN_HASH_VALUE 3334601 and runs in 100 or so seconds. I took a look at the plans before doing the SQL Profile but did not really dig into why the plans changed. It was 4:30 pm or so and I was trying to get out the door since I was not on call and wanted to get home at a normal time and leave the problems to the on-call DBA. Here is the good plan:

Plan hash value: 1314604389

------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |                    |       |       |  3090 (100)|          |
|   1 |  HASH JOIN RIGHT SEMI           |                    |  2311 |  3511K|  3090   (1)| 00:00:13 |
|   2 |   VIEW                          | VW_SQ_1            |   967 | 44482 |  1652   (1)| 00:00:07 |
|   3 |    HASH JOIN                    |                    |   967 | 52218 |  1652   (1)| 00:00:07 |
|   4 |     TABLE ACCESS FULL           | PS_PST_VCHR_TAO4   |    90 |  1980 |    92   (3)| 00:00:01 |
|   5 |     NESTED LOOPS                |                    | 77352 |  2417K|  1557   (1)| 00:00:07 |
|   6 |      INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   7 |      TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  | 77352 |  2039K|  1557   (1)| 00:00:07 |
|   8 |       INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  | 77352 |       |   756   (1)| 00:00:04 |
|   9 |   TABLE ACCESS BY INDEX ROWID   | PS_VCHR_TEMP_LN4   | 99664 |   143M|  1434   (1)| 00:00:06 |
|  10 |    INDEX RANGE SCAN             | PSAVCHR_TEMP_LN4   | 99664 |       |   630   (1)| 00:00:03 |
------------------------------------------------------------------------------------------------------

Here is the bad plan:

Plan hash value: 3334601

---------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                    |       |       |  1819 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID       | PS_VCHR_TEMP_LN4   |  2926 |  4314K|  1814   (1)| 00:00:08 |
|   2 |   NESTED LOOPS                     |                    |  2926 |  4446K|  1819   (1)| 00:00:08 |
|   3 |    VIEW                            | VW_SQ_1            |     1 |    46 |     4   (0)| 00:00:01 |
|   4 |     SORT UNIQUE                    |                    |     1 |    51 |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID   | PS_PST_VCHR_TAO4   |     1 |    23 |     1   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                 |                    |     1 |    51 |     4   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                |                    |     1 |    28 |     3   (0)| 00:00:01 |
|   8 |         INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   9 |         TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  |     1 |    23 |     3   (0)| 00:00:01 |
|  10 |          INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  |     1 |       |     2   (0)| 00:00:01 |
|  11 |        INDEX RANGE SCAN            | PS_PST_VCHR_TAO4   |     1 |       |     1   (0)| 00:00:01 |
|  12 |    INDEX RANGE SCAN                | PSAVCHR_TEMP_LN4   |   126K|       |  1010   (1)| 00:00:05 |
---------------------------------------------------------------------------------------------------------

Notice that in the bad plan the Rows column has 1 in it on many of the lines, but in the good plan it has larger numbers. Something about the statistics and the values in the where clause caused the optimizer to build the bad plan as if no rows would be accessed from these tables even though many rows would be accessed. So, it made a plan based on wrong information. But I had no time to dig further. I did ask my coworker if anything had changed about this job and nothing had.

So, I created a SQL Profile script by going to the utl subdirectory under the SQLT install directory on the database server. I generated the script by running coe_xfr_sql_profile.sql with the sql_id gwv75p0fyf9ys and plan hash value 1314604389. I then edited the generated script, coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql, changed the setting force_match=>FALSE to force_match=>TRUE, and ran it. The long running job finished shortly thereafter, and no new incidents have occurred in subsequent runs.
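In outline, the session looked something like this (prompts and generated output elided; run as a suitably privileged user):

SQL> @coe_xfr_sql_profile.sql gwv75p0fyf9ys 1314604389

-- edit the generated script, changing
--   force_match => FALSE
-- to
--   force_match => TRUE

SQL> @coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql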

The only thing that confuses me is that when I run fmsstat2.sql now with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 I do not see any runs with the good plan. Maybe future runs of the job have a different FORCE_MATCHING_SIGNATURE and the SQL Profile only helped the one job. If that is true, the future runs may have had the correct statistics and run the good plan on their own.

I wanted to post this to give an example of using force_match=>TRUE with coe_xfr_sql_profile. I had an earlier post about this subject, but I thought another example could not hurt. I also wanted to show how I use fmsstat2.sql to find multiple SQL statements by their FORCE_MATCHING_SIGNATURE value. I realize that SQL Profiles are a kind of band-aid rather than a solution to the real problem. But I got out the door by 5 pm on Monday and did not get woken up in the middle of the night, so sometimes a quick fix is what you need.

Bobby

Categories: DBA Blogs

Developers Decide One Cloud Isn’t Enough

OTN TechBlog - Wed, 2019-04-17 08:00

Introduction

Developers have significantly greater choice today than even a few years ago when deciding where to build, test and host their services and applications, which clouds to move existing on-premises workloads to, and which of the multitude of open source projects to leverage. So why, in this new era of empowered developers and expanding choice, have so many organizations pursued a single-cloud strategy? In recent years, cloud service providers have added capacity, functionality, tools, resources and services, and new cloud native open source projects have proliferated. The result is better performance, different cost models and more choice for developers and DevOps engineers, along with increased competition among providers. This is leading to a new era of cloud choice, in which multi-cloud and hybrid cloud models will be the norm.

As new cloud native design and development technologies like Kubernetes, serverless computing and the maturing discipline of microservices emerge, they help accelerate, simplify and expand deployment and development options. Users can leverage these new technologies with their existing designs and deployments, and the flexibility they afford expands users’ options to run on many different platforms. Given this rapidly changing cloud landscape, it is not surprising that hybrid cloud and multi-cloud strategies are being adopted by an increasing number of companies today.

For a deeper dive into Prediction #7 of the 10 Predictions for Developers in 2019 offered by Siddhartha Agarwal, “Developers Decide One Cloud Isn’t Enough”, we look at the growing trend for companies and developers to choose more than one cloud provider. We’ll examine a few of the factors they consider, the needs determined by a company’s place in the development cycle, business objectives, and level of risk tolerance, and predict how certain choices will trend in 2019 and beyond.

 

Different Strokes

We are in a heterogeneous IT world today. A plethora of choice and use cases, coupled with widely varying technical and business needs and approaches to solving them, give rise to different solutions. No two are exactly the same, but development projects today typically fall within the following scenarios.

A. Born-in-the-cloud development – these projects suffer little to no constraint imposed by existing applications; it is highly efficient and cost-effective to begin design in the cloud. They naturally leverage containers and new open source development tools like serverless (https://fnproject.io/) or service mesh platforms (e.g., Istio). A decade ago, startup costs based on datacenter needs alone were a serious barrier to entry for budding tech companies – cloud computing has completely changed this.

B. On-premises development moving to the cloud – enterprises in this category have many more factors to consider. Java teams, for example, are rapidly adopting frameworks like Helidon and GraalVM to help them move to a microservice architecture and migrate applications to the cloud. But will greenfield development projects start only in the cloud? Do they migrate legacy workloads to the cloud? How do they balance existing investments with new opportunities? And what about the interface between on-premises systems and the cloud?

C. Remaining mostly on premises but moving some services to the cloud – options are expanding for those in this category. A hybrid cloud approach has been growing, and we predict it will continue to grow over at least the next few years. The cloud native stacks available on premises now mirror the cloud native stacks in the cloud, enabling a new generation of hybrid cloud use cases. An integrated and supported cloud native framework that spans on-premises and cloud options delivers choice once again. And security, privacy and latency concerns will dictate some of these projects’ unique development needs.

 

If It Ain’t Broke, Don’t Fix It?

IT investments are real. Inertia can be hard to overcome. Let’s look at the main reasons for not distributing workloads across multiple clouds.  

  • Economy of scale tops the list, as most cloud providers will offer discounts for customers who go all in; larger workloads on one cloud provide negotiating leverage.
  • Development staff familiarity with one chosen platform makes it easier to bring on and train new developers; they ramp to productivity faster.
  • Custom features or functionality unique to the main cloud provider may need to be removed or redesigned in moving to another platform. Even on supposedly open platforms, developers must be aware of the not-so-obvious features impacting portability.
  • Geographical location of datacenters for privacy and/or latency concerns in less well-served areas of the world may also inhibit choice, or force uncomfortable trade-offs.
  • Risk mitigation is another significant factor, as enterprises seek to balance conflicting business needs with associated risks. Lean development teams often need to choose between taking on new development work and modernizing legacy applications when resources are scarce.

Change is Gonna Do You Good

These are valid concerns, but as dev teams look more deeply into the robust services and offerings emerging today, the trend is to diversify.

The most frequently cited concern is that of vendor lock-in. This counter-argument to economy of scale says that the more difficult it is to move your workloads off of one provider, the less motivated that vendor is to help reduce your cost of operations. For SMBs (small to mid-sized businesses) without much leverage in comparison to large enterprises, this can be significant. Ensuring portability of workloads is important. A comprehensive cloud native infrastructure is imperative here – one that includes not only container orchestration but also streaming, CI/CD, and observability and analysis (e.g., Prometheus and Grafana). Containers and Kubernetes deliver portability, provided your cloud vendor uses unmodified open source code. In this model, a developer can develop their web application on their laptop, push it into a CI/CD system on one cloud, and leverage another cloud for managed Kubernetes to run their container-based app. However, the minute you start using specific APIs from the underlying platform, moving to another platform is much more difficult. AWS Lambda is one of many examples.

Mergers, acquisitions, changing business plans or practices, or other unforeseen events may impact a business at a time when they are not equipped to deal with it. Having greater flexibility to move with changing circumstances, and not being rushed into decisions, is also important. Consider for example, the merger of an organization that uses an on-premises PaaS, such as OpenShift, merging with another organization that has leveraged the public cloud across IaaS, PaaS and SaaS. It’s important to choose interoperable technologies to anticipate these scenarios.

Availability is another reason cited by customers. A thoughtfully designed multi-cloud architecture not only offers potential negotiating power as mentioned above, but also allows for failover in case of outages, DDoS attacks, local catastrophes, and the like. Larger cloud providers with massive resources and proliferation of datacenters and multiple availability domains offer a clear advantage here, but it also behooves the consumer to distribute risk across not only datacenters, but over several providers.

Another important set of factors is related to cost and ROI. Running the same workload on multiple cloud providers to compare cost and performance can help achieve business goals, and also help inform design practices.  

Adopting open source technologies enables businesses to choose where to run their applications based on the criteria they deem most important, be they technical, cost, business, compliance, or regulatory concerns. Moving to open source thus opens up the possibility to run applications on any cloud. That is, any CNCF-certified Kubernetes managed cloud service can safely run Kubernetes – so enterprises can take advantage of this key benefit to drive a multi-cloud strategy.

The trend in 2019 is moving strongly in the direction of design practices that support all aspects of a business’s goals, with the best offers, pricing and practices from multiple providers. This direction makes enterprises more competitive – maximally productive, cost-effective, secure, available, and flexible regarding platform choice.

 

Design for Flexibility

Though having a multi-cloud strategy seems to be the growing trend, it does come with some inherent challenges. To address issues like interoperability among multiple providers and establishing depth of expertise with a single cloud provider, we’re seeing an increased use of different technologies that help to abstract away some of the infrastructure interoperability hiccups. This is particularly important to developers, who seek the best available technologies that fit their specific needs.

Serverless computing seeks to reduce the awareness of any notion of infrastructure. Consider it similar to water or electricity utilities – once you have attached your own minimal home infrastructure to the endpoint offered by the utility, you simply turn on the tap or light switch, and pay for what you consume. The service scales automatically – for all intents and purposes, you may consume as much output of the utility or service as desired, and the bill goes up and down accordingly. When you are not consuming the service, there is no (or almost no) overhead.  

Development teams are picking cloud vendors based on capabilities they need. This is especially true in SaaS. SaaS is a cloud-based software delivery model with payment based on usage, rather than license or support-based pricing. The SaaS provider develops, maintains and updates the software, along with the hardware, middleware, application software, and security. SaaS customers can more easily predict total cost of ownership with greater accuracy. The more modern, complete SaaS solutions also allow for greater ease of configuration and personalization, and offer embedded analytics, data portability, cloud security, support for emerging technologies, and connected, end-to-end business processes.

Serverless computing not only provides simplicity through abstraction of infrastructure, its design patterns also promote the use of third-party managed services whenever possible. This provides flexibility and allows you to choose the best solution for your problem from the growing suite of products and services available in the cloud, from software-defined networking and API gateways, to databases and managed streaming services. In this design paradigm, everything within an application that is not purely business logic can be efficiently outsourced.

More and more companies are finding it increasingly easy to connect elements together with Serverless functionality for the desired business logic and design goals. Serverless deployments talking to multiple endpoints can run almost anywhere; serverless becomes the “glue” that is used to make use of the best services available, from any provider.

Serverless deployments can be run anywhere, even on multiple cloud platforms. Hence flexibility of choice expands even further, making it arguably the best design option for those desiring portability and openness.

 

Summary

There are many pieces required to deliver a successful multi-cloud approach. Modern developers use specific criteria to validate if a particular cloud is “open” and whether or not it supports a multi-cloud approach. Does it have the ability to

  • extract/export data without incurring significant expense or overhead?
  • be deployed either on-premises or in the public cloud, including for custom applications, integrations between applications, etc.?
  • monitor and manage applications that might reside on-premises or in other clouds from a single console, with the ability to aggregate monitoring/management data?

And does it have a good set of APIs that enables access to everything in the UI via an API? Does it expose all the business logic and data required by the application? Does it have SSO capability across applications?

The CNCF (Cloud Native Computing Foundation) has over 400 cloud provider, user, and supporter members, and its working groups and cloud events specification engage these and thousands more in the ongoing mission to make cloud native computing ubiquitous, and allow engineers to make high-impact changes frequently and predictably with minimal toil.

We predict this trend will continue well beyond 2019 as CNCF drives adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects, and democratizing state-of-the-art patterns to make these innovations accessible for everyone.

Oracle is a platinum member of CNCF, along with 17 other major cloud providers. We are serious about our commitment to open source, open development practices, and sharing our expertise via technical tutorials, talks at meetups and conferences, and helping businesses succeed. Learn more and engage with us at cloudnative.oracle.com, and we’d love to hear if you agree with the predictions expressed in this post. 

Leading Pharmacy Extends 100 Year Legacy with Oracle

Oracle Press Releases - Wed, 2019-04-17 07:00
Press Release
Leading Pharmacy Extends 100 Year Legacy with Oracle
Farmatodo expands operations in new countries with modern retail technology

REDWOOD SHORES, Calif. and CARACAS, Venezuela—Apr 17, 2019

Farmatodo, a leading Venezuelan self-service chain of pharmacies, has specialized in providing medicine, personal care, beauty and baby products to help consumers care for themselves and their families for more than 100 years. Through a seamless shopping experience, the company offers approximately 8000 products in more than 200 stores and online in Venezuela and Colombia. With Oracle Retail, Farmatodo has established a framework to expand into new countries, deploy new stores faster, and gained the agility to serve in-store shoppers better with a modern point of service (POS) system.

In addition, this new technology will support Farmatodo’s aggressive delivery model in Colombia. While the area is known for challenging traffic congestion, the pharmacy offers home delivery within 30 minutes. To help fulfill this promise, the real-time inventory visibility and store consistency Oracle provides are critical.

“The continuity and expansion of our retail operation depended on reducing technological risks and improving information integrity and business processes. We replaced outdated legacy systems with Oracle to create a foundation for growth in Latin America,” said Angelo Cirillo, chief information officer, Farmatodo. “Usually these projects take three years for only one country. Leveraging Oracle’s best practices and integrated solutions, we fast-tracked the implementation of two countries in two years.”

The company relies on Oracle Retail Merchandising System, Oracle Retail Store Inventory Management and Oracle Retail Warehouse Management System to manage the business at a corporate level and Oracle Retail Xstore Point-of-Service to enhance the consumer experience on the store floors.

Farmatodo selected Oracle PartnerNetwork (OPN) Platinum level member Retail Consult to implement the latest versions of the solutions. A longtime collaborator, Retail Consult has a deep understanding of Oracle technology, retail processes, and customers. The company employed a multifunctional team with a strong customer-centric approach, along with the Oracle Retail Reference Model, to chart a path to success for Farmatodo.

“To support international expansion, we faced a technological and corporate challenge. The previous experience with Oracle Retail system allowed us to fully evaluate and emulate features and functionalities before extending them throughout new and existing operations,” said Francisco Gerardo Díaz Parra, project director, Farmatodo. “The stability and data security provided by Oracle, combined with the highly skilled implementation partner and defined project governance brought us the optimal mix to integrate processes and modernize systems.”

“For thirty-plus years, we have been working hand-in-hand with global retailers to help ensure successful implementations and outcomes. The power of this combined knowledge continues to be central in delivering unmatched industry best practices and guiding innovations that are enabled by our modern platform. Our goal is to help our customers keep pace with the changes in consumer behavior and to enable them with operational agility and a clear view into their operations so they can move at the same speed,” said Mike Webster, senior vice president and general manager, Oracle Retail.

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Retail Consult

Retail Consult is a highly specialized group that has a big focus on technology solutions for retail, offering clients global perspective and experience with operations in Europe, North, South and Central America. The most senior resources average 15 years of retail experience, and the multilingual team integrates retail-specific skills in strategy, technology architecture, business process, change management, support, and management.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Westchester Community College Uses Oracle Cloud to Modernize Education Experience

Oracle Press Releases - Wed, 2019-04-17 07:00
Press Release
Westchester Community College Uses Oracle Cloud to Modernize Education Experience
Community college deploys Oracle Student Cloud to recruit and engage students across an expanding portfolio of learning programs

Redwood Shores, Calif.—Apr 17, 2019

Westchester Community College  is implementing Oracle Student Cloud solutions to support its goal of providing accessible, high-quality and affordable education to its diverse community. The two-year public college is affiliated with the State University of New York, the nation’s largest comprehensive public university system.

To keep pace with fast-changing workforce requirements and student expectations, institutions such as Westchester Community College are evolving to improve student outcomes and operational efficiency. This change demands both a new model for teaching, learning and research, and better ways to recruit, engage and manage students throughout their lifelong learning experience.

“We are committed to student success, academic excellence, and workforce and economic development. To deliver on those promises we needed to leverage the best technology to modernize our operations and how we engage with our students,” said Dr. Belinda Miles, president of Westchester Community College, Valhalla, N.Y. “By expanding our Oracle footprint with Oracle Student Cloud we will be able to support a diverse array of academic programs and learning opportunities, including continuing education, while delivering better experiences to our students.”

Oracle Student Cloud solutions, including Student Management and Recruiting, will integrate seamlessly with Westchester’s existing Oracle Campus student information system. With Oracle Student Management, the school will be able to better inform existing and prospective students about classes and services, and Oracle Student Recruiting will improve and simplify the student recruitment process. The college will also be using Oracle Student Engagement to better communicate with and engage current and prospective students.

“Oracle Student Cloud enables organizations such as Westchester to promote an increasingly diverse array of academic programs for successful life-long learning,” said Vivian Wong, GVP of higher education development, Oracle. “We are delighted to partner with Westchester on their cloud transformation journey.”

Supporting the entire student life cycle, Oracle Student Cloud is a complete suite of higher education cloud solutions, including Student Management, Student Recruiting, Student Engagement, and Student Financial Planning. Because the modules are designed to work as a suite, institutions can choose their own incremental path to the cloud.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Podcast: On the Highway to Helidon

OTN TechBlog - Tue, 2019-04-16 23:00

Are you familiar with Project Helidon? It’s an open source Java microservices framework introduced by Oracle in September of 2018.  As Helidon project lead Dmitry Kornilov explains in his article Helidon Takes Flight, "It’s possible to build microservices using Java EE, but it’s better to have a framework designed from the ground up for building microservices."

Helidon consists of a lightweight set of libraries that require no application server and can be used in Java SE applications. While these libraries can be used separately, using them in combination provides developers with a solid foundation on which to build microservices.

In this program we’ll dig into Project Helidon with a panel consisting of two people who are actively engaged in the project and two community leaders who have used Helidon in development projects and organized Helidon-focused meetups.

This program was recorded on Friday, March 8, 2019. So let’s journey through time and space and get to the conversation. Just press play in the widget.

The Panelists

Dmitry Kornilov
Senior Software Development Manager, Oracle; Project Lead, Project Helidon
Prague, Czech Republic

Tomas Langer
Consulting Member of Technical Staff, Oracle; Member of the Project Helidon Team
Prague, Czech Republic

José Rodrigues (Oracle ACE Associate)
Principal Consultant and Business Analyst, Link Consulting; Co-Organizer, Oracle Developer Meetup Lisbon
Lisbon, Portugal

Phil Wilkins (Oracle ACE)
Senior Consultant, Capgemini; Co-Organizer, Oracle Developer Meetup London
Reading, UK


Help with v$statname and v$sysstat

Tom Kyte - Tue, 2019-04-16 19:06
Tom, can you please provide info on how I can find full table scan and index scan activity in the database using v$statname and v$sysstat? Do I need to set TIMED_STATISTICS=TRUE before running queries against v$sysstat?...
Categories: DBA Blogs
