My Experiences with Databases

Oracle, MySQL, SQL Server, Python, Azure, AWS, Oracle Cloud, GCP, etc.



    The experiences, test cases, views, and opinions expressed on this website are my own and do not reflect the views or opinions of my employer. This site is independent of Oracle Corporation and does not represent it in any way. Oracle does not officially sponsor, approve, or endorse this site or its content. Product and company names mentioned on this website may be trademarks of their respective owners.

Archive for the ‘Python’ Category

Python Way to Download All the ASKTOM and Oracle Magazine Posts Shared by Connor McDonald in a LinkedIn Group

Posted by Sriram Sanka on November 8, 2022


There is a group post by Connor on LinkedIn, in the Oracle Senior DBA group, showing the links to access the best ASKTOM posts and Oracle Magazines from https://asktom.oracle.com/pls/apex/f?p=100:9

Here is a code snippet that helps you download all the posts and magazines as HTML files to a destination of your choice in your local file system.

Snippet To Download TOM KYTE Posts

import requests
from bs4 import BeautifulSoup
import string
import os
import urllib.request

def Download_ASKTOM_files(path, url, enc, title):
    try:
        response = urllib.request.urlopen(url)
        webContent = response.read().decode(enc)
        dest = os.path.join(path, 'ASKTOM')
        os.makedirs(dest, exist_ok=True)
        n = os.path.join(dest, title + '.html')
        with open(n, 'w', encoding=enc) as f:
            f.write(webContent)
    except Exception:
        # Log failed URLs; open in append mode so earlier entries are kept
        n1 = os.path.join(path, 'ASKTOM_Download_Error.log')
        with open(n1, 'a', encoding=enc) as f1:
            f1.write(url + '\n')

reqs = requests.get("https://asktom.oracle.com/tomkyte-blog.htm")
soup = BeautifulSoup(reqs.text, 'html.parser')
for link2 in soup.select("a[href]"):
    src = link2["href"]
    durl = 'https://asktom.oracle.com/' + src
    # Strip punctuation from the link text to build a safe file name
    tit = link2.get_text().translate(str.maketrans('', '', string.punctuation))
    print(tit.replace(" ", "_"), durl)
    Download_ASKTOM_files("c:\\Users\\cloudio\\Downloads\\blogs\\", durl, 'UTF-8', tit.replace(" ", "_"))

Snippet to Download Magazines

import requests
from bs4 import BeautifulSoup
import string
import os
import urllib.request

def Download_ASKTOM_files(path, url, enc, title):
    try:
        response = urllib.request.urlopen(url)
        webContent = response.read().decode(enc)
        dest = os.path.join(path, 'ASKTOM_MAG')
        os.makedirs(dest, exist_ok=True)
        n = os.path.join(dest, title + '.html')
        with open(n, 'w', encoding=enc) as f:
            f.write(webContent)
    except Exception:
        # Log failed URLs; open in append mode so earlier entries are kept
        n1 = os.path.join(path, 'ASKTOM_MAG_Download_Error.log')
        with open(n1, 'a', encoding=enc) as f1:
            f1.write(url + '\n')

reqs = requests.get("https://asktom.oracle.com/magazine-archive.htm")
soup = BeautifulSoup(reqs.text, 'html.parser')
for link2 in soup.select("a[href]"):
    src = link2["href"]
    durl = 'https://asktom.oracle.com/' + src
    # Strip punctuation from the link text to build a safe file name
    tit = link2.get_text().translate(str.maketrans('', '', string.punctuation))
    print(tit.replace(" ", "_"), durl)
    Download_ASKTOM_files("c:\\Users\\cloudio\\Downloads\\blogs\\", durl, 'UTF-8', tit.replace(" ", "_"))

Hope you liked it 🙂

Posted in ASKTOM, CONNOR, Python, TOMKYTE

Web-Scraping 🐍 – Part 2 – Download scripts from code.activestate.com with Python -Pagination

Posted by Sriram Sanka on October 22, 2022


In my previous post, we saved blog entries as files in a directory using web scraping. Now let's read a web page's entries and save the links (and content 🙂) as files. One can extract the content from a web page by reading/validating the tags as needed. In this post we are going to observe the URL pattern for reading and downloading files from code.activestate.com.

code.activestate.com is one of the best sources to learn Python. It has around 4K+ scripts available. Let's take a look at the source.

Let's invoke the URL https://code.activestate.com/recipes/langs/python/ in the browser & Jupyter to get the source of the webpage.

We have around 4,500+ scripts across 230 pages. When you navigate through the pages, you can see the URL gets appended with a page id such as “/?page=1” at the end.

import requests
from bs4 import BeautifulSoup

url = 'https://code.activestate.com/recipes/langs/python/?page=1'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
print(soup)

If you are not sure how to generate Python sample code, try Postman as below to get the code snippet.

You can see the pattern in the output.

Take a look at the first link. It reads as https://code.activestate.com/recipes/580811-uno-text-based/?in=lang-python, and the download link reads as https://code.activestate.com/recipes/580811-uno-text-based/download/1/

To read the scripts from all the pages, we can pass the page number at the end using a simple for loop. We also need to replace /?in=lang-python with /download/1/ in each URL and prepend https://code.activestate.com to the result.

for x in range(1, 250):
    try:
        reqs = requests.get("https://code.activestate.com/recipes/langs/python/?page=" + str(x))
        soup = BeautifulSoup(reqs.text, 'html.parser')
        for link2 in soup.select("a[href]"):
            if "lang-python" in link2["href"]:
                src = link2["href"].replace("/recipes", "https://code.activestate.com/recipes").replace("/?in=lang-python", "/download/1/")
                # Strip punctuation from the link text to build a safe file name
                tit = link2.get_text().translate(str.maketrans('', '', string.punctuation))
                print(tit.replace(" ", "_"), src)
                Download_active_state_files("c:\\Users\\Dell\\Downloads\\blogs\\", src, 'UTF-8', tit.replace(" ", "_"))
    except Exception:
        pass  # skip pages that fail to load

Here is the complete code to download all the scripts as .py files into the given directory.

import requests
from bs4 import BeautifulSoup
import string
import os
import urllib.request

def Download_active_state_files(path, url, enc, title):
    try:
        response = urllib.request.urlopen(url)
        webContent = response.read().decode(enc)
        dest = os.path.join(path, 'Code_Active_state')
        os.makedirs(dest, exist_ok=True)
        n = os.path.join(dest, title + '.py')
        with open(n, 'w', encoding=enc) as f:
            f.write(webContent)
    except Exception:
        # Log failed URLs; open in append mode so earlier entries are kept
        n1 = os.path.join(path, 'Code_Active_state_Download_Error.log')
        with open(n1, 'a', encoding=enc) as f1:
            f1.write(url + '\n')

for x in range(1, 250):
    try:
        reqs = requests.get("https://code.activestate.com/recipes/langs/python/?page=" + str(x))
        soup = BeautifulSoup(reqs.text, 'html.parser')
        for link2 in soup.select("a[href]"):
            if "lang-python" in link2["href"]:
                src = link2["href"].replace("/recipes", "https://code.activestate.com/recipes").replace("/?in=lang-python", "/download/1/")
                # Strip punctuation from the link text to build a safe file name
                tit = link2.get_text().translate(str.maketrans('', '', string.punctuation))
                print(tit.replace(" ", "_"), src)
                Download_active_state_files("c:\\Users\\Dell\\Downloads\\blogs\\", src, 'UTF-8', tit.replace(" ", "_"))
    except Exception:
        pass  # skip pages that fail to load

You can compare the downloaded files with the web page versions.

Hope you like it. 🙂

Posted in POSTMAN, Python, WebScraping

How-to-Install-Python-with-Anaconda & Connect with Oracle

Posted by Sriram Sanka on October 4, 2022


You can also download the Python installer executable from https://www.python.org/downloads/windows/

With the help of cx_Oracle, we can connect to Oracle and execute commands.

import pandas as pd
import pandas.io.sql as psql
import cx_Oracle
import os

os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"

dsn_tns = cx_Oracle.makedsn('localhost', 1521, 'xe')
ora_conn = cx_Oracle.connect('sriram', 'sriram', dsn=dsn_tns)
df1 = psql.read_sql('SELECT * FROM dba_users', con=ora_conn)
#for v in df1['USERNAME']:
#    print(v)
print("Running :", df1)
ora_conn.close()

You can use getpass to hide the password prompt at the command line.

import pandas as pd
import pandas.io.sql as psql
import cx_Oracle
import getpass
import os

os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"
username = input("Enter User Name: ")
userpwd = getpass.getpass(prompt='Password: ', stream=None)

dsn_tns = cx_Oracle.makedsn('localhost', 1521, 'xe')
ora_conn = cx_Oracle.connect(username, userpwd, dsn=dsn_tns)
df1 = psql.read_sql('SELECT username,account_status FROM dba_users', con=ora_conn)
print("Running :", df1)
ora_conn.close()

We can plot the results after installing matplotlib.

import pandas as pd
import pandas.io.sql as psql
import cx_Oracle
import getpass
import os
import matplotlib.pyplot as plt

os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"
username = input("Enter User Name: ")
userpwd = getpass.getpass(prompt='Password: ', stream=None)

dsn_tns = cx_Oracle.makedsn('localhost', 1521, 'xe')
ora_conn = cx_Oracle.connect(username, userpwd, dsn=dsn_tns)
df1 = psql.read_sql('SELECT count(*) cnt,account_status FROM dba_users group by account_status', con=ora_conn)

print(df1)
df1.plot(x="ACCOUNT_STATUS", y=["CNT"])
plt.show()
ora_conn.close()

import pandas as pd
import pandas.io.sql as psql
import cx_Oracle
import getpass
import os
import matplotlib.pyplot as plt

os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"
username = input("Enter User Name: ")
userpwd = getpass.getpass(prompt='Password: ', stream=None)

dsn_tns = cx_Oracle.makedsn('localhost', 1521, 'xe')
ora_conn = cx_Oracle.connect(username, userpwd, dsn=dsn_tns)
df1 = psql.read_sql('SELECT count(*) cnt,account_status FROM dba_users group by account_status', con=ora_conn)
print(df1)
df1.plot.bar(x="ACCOUNT_STATUS", y=["CNT"], rot=0)
plt.show()
ora_conn.close()

Hope you like it !!!

Posted in Installation, Linux, Python, Windows

Fun with Python – Create Web Traffic using selenium & Python.

Posted by Sriram Sanka on October 4, 2022


In my previous post, I downloaded content from blogs and stored it in the file system. This increased my Google search engine stats and blog traffic as well.

As you can see, most views are from Canada & India. With this, I thought of writing a Python program to create traffic to my blog by reading my posts (so far) using Selenium and a secure VPN. As I am connected to a Canada VPN, you can see the views below, before and after.

In general, QA performs the same for app-testing automation using Selenium WebDriver and JS etc. Here I am using Python. Let's see the code part.

import time
import pandas as pd
import requests
from lxml import etree
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = Options()
options.add_argument("start-maximized")
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--ignore-certificate-errors-spki-list')
options.add_argument('--ignore-certificate-errors')
options.add_argument("--incognito")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)

def create_traffic(url):
    driver.get(url)
    time.sleep(5)
    # total page height, in case we want to scroll further
    height = int(driver.execute_script("return document.documentElement.scrollHeight"))
    driver.execute_script('window.scrollBy(0,10)')
    time.sleep(10)

# Read every post URL from the blog sitemap and save it to a file
main_sitemap = 'https://ramoradba.com/sitemap.xml'
xmlDict = []
r = requests.get(main_sitemap)
root = etree.fromstring(r.content)
print("The number of sitemap tags are {0}".format(len(root)))
for sitemap in root:
    children = sitemap.getchildren()
    xmlDict.append({'url': children[0].text})
    with open('links23.txt', 'a') as f:
        f.write(f'\n{children[0].text}')

# Visit each saved link to generate a page view
col_name = ['url']
df_url = pd.read_csv("links23.txt", names=col_name)
for row in df_url.url:
    print(row)
    create_traffic(row)

This part is the main block, reading through my blog post URLs from the links downloaded from the sitemap.

def create_traffic(url):
    driver.get(url)
    time.sleep(5)
    height = int(driver.execute_script("return document.documentElement.scrollHeight"))
    driver.execute_script('window.scrollBy(0,10)')
    time.sleep(10)

This code opens the URL in the browser and scrolls. The same can be configured to run forever with a while loop, reading random posts from the blog instead of all the posts.
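The "while loop forever" variant mentioned above can be sketched like this. This is only a sketch: `visit_random_posts` and its parameters are names I made up, and the `visit` callable stands in for the `create_traffic()` function from the snippet.

```python
import random
import time

def visit_random_posts(urls, visit, min_wait=1.0, max_wait=3.0, max_visits=None):
    """Keep visiting random posts, pausing a random interval between visits.

    `visit` is any callable that opens a URL -- for example the
    create_traffic() function above. Pass max_visits=None to loop forever.
    """
    visits = 0
    while max_visits is None or visits < max_visits:
        visit(random.choice(urls))
        time.sleep(random.uniform(min_wait, max_wait))
        visits += 1

# Demonstration with a stand-in visit function instead of a real browser:
seen = []
visit_random_posts(["https://ramoradba.com/a", "https://ramoradba.com/b"],
                   seen.append, min_wait=0.0, max_wait=0.0, max_visits=3)
print(len(seen))  # 3
```

The random waits make the visits look less mechanical than hitting every post in sitemap order.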

The more you execute the program, the more traffic you will get. Hope you like it.

Let's connect to Kyiv, Ukraine, and get views from there.

Let's execute and see the progress...

After execution, views from Ukraine have increased.

Follow me for more interesting posts in future: Twitter – TheRamOraDBA, LinkedIn – ramoradba

Posted in Python, Selenium, Web-Traffic, WebScraping

Hacking – FATDBA.COM ¯\_(ツ)_/¯ 

Posted by Sriram Sanka on October 4, 2022


#Python #WebScraping

Just kidding!!! It's not hacking; this is known as WEB-SCRAPING using the powerful Python.

What is web scraping?

Web scraping is the process of using bots to extract content and data from a website. Unlike screen scraping, which only copies pixels displayed onscreen, web scraping extracts underlying HTML code and, with it, data stored in a database.

You just need a browser and a small, simple Python script to get the content from the web. First, let's see the parts of the code and verify.

Step 1: Install & load the Python modules

import time
import os
import pandas as pd
import requests
from lxml import etree
import random
import urllib.request
import urllib.parse
import string

Step 2: Define a function to get the host name of the site/blog, to use as a folder name.

def get_host(url,delim):
    parsed_url = urllib.parse.urlparse(url)
    return(parsed_url.netloc.replace(delim, "_"))

Step 3: Define a Function to Get the Blog/Page Title

def findTitle(url, delim):
    # Decode the page bytes before splitting out the <title> text
    webpage = urllib.request.urlopen(url).read().decode('utf-8', errors='ignore')
    title = webpage.split('<title>')[1].split('</title>')[0]
    return title.replace(delim, "_").translate(str.maketrans('', '', string.punctuation))

Step 4: Define a Function to Generate a Unique string of a given length

def unq_str(length):
    # Random string of uppercase letters and digits of the given length
    return ''.join(random.choices(string.ascii_uppercase + string.digits, k=length))

Step 5: Write the Main Block to Download the Content from the Site/Blog

def Download_blog(path, url, enc):
    try:
        response = urllib.request.urlopen(url)
        webContent = response.read().decode(enc)
        dest = os.path.join(path, get_host(url, "."))
        os.makedirs(dest, exist_ok=True)
        n = os.path.join(dest, findTitle(url, " ") + '.html')
        with open(n, 'w', encoding=enc) as f:
            f.write(webContent)
    except Exception:
        # Log failed URLs; open in append mode so earlier entries are kept
        n1 = os.path.join(path, get_host(url, ".") + '_Download_Error.log')
        with open(n1, 'a', encoding=enc) as f1:
            f1.write(url + '\n')

Step 6: Define another function to save the blog post URLs into a file & invoke the main block to get the blog content.

def write_post_url_to_file(blog,path):        
    main_sitemap = blog+'/sitemap.xml'
    r = requests.get(main_sitemap)
    root = etree.fromstring(r.content)
    for sitemap in root:
        children = sitemap.getchildren()
        with open(str(path+'\\'+get_host(blog,".")) +'_blog_links.txt', 'a') as f:
            f.write( f'\n{children[0].text}')
    col_name = ['url']
    df_url = pd.read_csv(str(path+'\\'+get_host(blog,".")) +'_blog_links.txt', names=col_name)
    for row in df_url.url:
        print(row)
        Download_blog(path,row,'UTF-8')
        
write_post_url_to_file("https://fatdba.com","c:\\Users\\Dell\\Downloads\\blogs\\")


This will create a file with the links and a folder named after the blog to store all the content/posts.

Sample output is as follows.

BOOM !!!

For more interesting posts you can follow me on Twitter – TheRamOraDBA & LinkedIn – ramoradba

Posted in download_blogs, Linux, Python, WebScraping, Windows

Python Basics – Part 1

Posted by Sriram Sanka on September 17, 2022


Language Introduction

Python is a dynamic, interpreted (bytecode-compiled) language. There are no type declarations of variables, parameters, functions, or methods in source code. This makes the code short and flexible, and you lose the compile-time type checking of the source code. Python tracks the types of all values at runtime and flags code that does not make sense as it runs.

https://www.edureka.co/blog/introduction-to-python/
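A tiny example of my own (not from the linked course) makes the dynamic-typing point concrete: the same name can hold different types, and a type error surfaces only when the offending line actually runs.

```python
# Dynamic typing in action: no declarations, types checked at runtime
x = 42              # x currently holds an int
x = "forty-two"     # now it holds a str -- rebinding is fine
print(type(x).__name__)   # str

try:
    x + 1           # str + int fails at runtime, not at compile time
except TypeError as err:
    print("caught at runtime:", type(err).__name__)
```

This is exactly the trade-off described above: shorter, more flexible code, but mistakes appear only when that code path executes.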

In the sections below I have attached a couple of reference documents and practice notes. To open the attachments, rename the file extension from txt to “ipynb”; they can then be accessed using Jupyter or Anaconda etc.

String Split

Description

Split the string input_str = ‘Kumar_Ravi_003’ into the person’s second name, first name, and unique customer code. In this example, second_name = ‘Kumar’, first_name = ‘Ravi’, customer_code = ‘003’.

input_str = input('data')      # e.g. 'Kumar_Ravi_003'
first_name = input_str[6:10]   # 'Ravi'
second_name = input_str[0:5]   # 'Kumar'
customer_code = input_str[-3:] # '003'
print(first_name)
print(second_name)
print(customer_code)

String lstrip()

input_str = input('Enter Input : ')
final_str = input_str.lstrip()
print(final_str)

List is a collection which is ordered and changeable. Allows duplicate members.

Tuple is a collection which is ordered and unchangeable. Allows duplicate members.

Set is a collection which is unordered, unchangeable*, and unindexed. No duplicate members.

Dictionary is a collection which is ordered** and changeable. No duplicate members.
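The four collection behaviours described above can be seen in a few lines (a quick illustrative example of my own):

```python
my_list = ["a", "b", "a"]           # ordered, changeable, duplicates allowed
my_tuple = ("a", "b", "a")          # ordered, unchangeable, duplicates allowed
my_set = {"a", "b", "a"}            # unordered, no duplicates
my_dict = {"a": 1, "b": 2, "a": 3}  # keys are unique; the later value wins

my_list[0] = "z"                    # lists can be modified in place
print(my_list)                      # ['z', 'b', 'a']
print(len(my_set))                  # 2 -- the duplicate "a" was dropped
print(my_dict["a"])                 # 3
```

Trying `my_tuple[0] = "z"` instead would raise a TypeError, because tuples are unchangeable.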

List to String

Description

Convert a list [‘Pythons syntax is easy to learn’, ‘Pythons syntax is very clear’] to a string using ‘&’. The sample output of this string will be:

Pythons syntax is easy to learn & Pythons syntax is very clear

Note that there is a space on both sides of ‘&’ (as usual in English sentences).

l = []
l.append('Pythons syntax is easy to learn')
l.append('Pythons syntax is very clear')
print('This is the List ', l)
input_str = l
string_1 = " & ".join(input_str)
print('This is Combined String ', string_1)

References

https://python-course.eu/advanced-python/lambda-filter-reduce-map.php

https://book.pythontips.com/en/latest/map_filter.html

https://python.swaroopch.com/functions.html

https://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/functions.html

https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/

https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html

https://docs.python.org/3/tutorial/controlflow.html

https://docs.python.org/3/reference/compound_stmts.html

https://docs.python.org/3/tutorial/datastructures.html

https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/

https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html

https://python.swaroopch.com/

https://docs.python-guide.org/intro/learning/

https://www.simplilearn.com/tutorials/python-tutorial

https://developers.google.com/edu/python/lists

https://developers.google.com/edu/python/introduction

Posted in Anaconda, Python

Install Python Modules using PIP & Upgrading Pip Version

Posted by Sriram Sanka on June 19, 2022


You can install Python modules by running pip as follows:

python -m pip install matplotlib

To upgrade pip itself to the latest version:

python.exe -m pip install --upgrade pip
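You can also confirm the pip version from inside Python using only the standard library (a small sketch; it assumes pip is installed in the active environment):

```python
import importlib.metadata
import sys

# Which interpreter is running, and which pip version belongs to it?
print(sys.executable)
print(importlib.metadata.version("pip"))
```

This is a handy check when several Python installations are on the same machine and you are unsure which one `pip` on the PATH belongs to.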

Posted in Python

How to Install Python with Anaconda

Posted by Sriram Sanka on April 12, 2020


Refer to the document for help with installing Anaconda successfully Installing+Python.1

You can Download the individual version from https://www.anaconda.com/products/individual

I prefer Spyder for the sample code snippets.

Sample code for Sudoku, submitted at https://www.geeksforgeeks.org/sudoku-backtracking-7/


# N is the size of the 2D matrix   N*N
N = 9
 
# A utility function to print grid
def printing(arr):
    for i in range(N):
        for j in range(N):
            print(arr[i][j], end = " ")
        print()
 
# Checks whether it will be
# legal to assign num to the
# given row, col
def isSafe(grid, row, col, num):
   
    # Check if we find the same num
    # in the similar row , we
    # return false
    for x in range(9):
        if grid[row][x] == num:
            return False
 
    # Check if we find the same num in
    # the similar column , we
    # return false
    for x in range(9):
        if grid[x][col] == num:
            return False
 
    # Check if we find the same num in
    # the particular 3*3 matrix,
    # we return false
    startRow = row - row % 3
    startCol = col - col % 3
    for i in range(3):
        for j in range(3):
            if grid[i + startRow][j + startCol] == num:
                return False
    return True
 
# Takes a partially filled-in grid and attempts
# to assign values to all unassigned locations in
# such a way to meet the requirements for
# Sudoku solution (non-duplication across rows,
# columns, and boxes) */
def solveSuduko(grid, row, col):
   
    # Check if we have reached the 8th
    # row and 9th column (0
    # indexed matrix) , we are
    # returning true to avoid
    # further backtracking
    if (row == N - 1 and col == N):
        return True
       
    # Check if column value  becomes 9 ,
    # we move to next row and
    # column start from 0
    if col == N:
        row += 1
        col = 0
 
    # Check if the current position of
    # the grid already contains
    # value >0, we iterate for next column
    if grid[row][col] > 0:
        return solveSuduko(grid, row, col + 1)
    for num in range(1, N + 1, 1):
       
        # Check if it is safe to place
        # the num (1-9)  in the
        # given row ,col  ->we
        # move to next column
        if isSafe(grid, row, col, num):
           
            # Assigning the num in
            # the current (row,col)
            # position of the grid
            # and assuming our assigned
            # num in the position
            # is correct
            grid[row][col] = num
 
            # Checking for next possibility with next
            # column
            if solveSuduko(grid, row, col + 1):
                return True
 
        # Removing the assigned num ,
        # since our assumption
        # was wrong , and we go for
        # next assumption with
        # diff num value
        grid[row][col] = 0
    return False
 
# Driver Code
 
# 0 means unassigned cells
grid = [[3, 0, 6, 5, 0, 8, 4, 0, 0],
        [5, 2, 0, 0, 0, 0, 0, 0, 0],
        [0, 8, 7, 0, 0, 0, 0, 3, 1],
        [0, 0, 3, 0, 1, 0, 0, 8, 0],
        [9, 0, 0, 8, 6, 3, 0, 0, 5],
        [0, 5, 0, 0, 9, 0, 6, 0, 0],
        [1, 3, 0, 0, 0, 0, 2, 5, 0],
        [0, 0, 0, 0, 0, 0, 0, 7, 4],
        [0, 0, 5, 2, 0, 6, 3, 0, 0]]
 
if (solveSuduko(grid, 0, 0)):
    printing(grid)
else:
    print("no solution exists")
 
    # This code is contributed by sudhanshgupta2019a

Posted in Anaconda, Books, Installation, Python

 