KNOWLEDGE BASE


Code for XML parsing

If you have XML files structured like this:

```xml
<!-- file1.xml -->
<root>
<step>Step1</step>
<ytext>Value1</ytext>
<step>Step2</step>
<ytext>Value2</ytext>
</root>
```

```xml
<!-- file2.xml -->
<root>
<step>Step3</step>
<ytext>Value3</ytext>
<step>Step4</step>
<ytext>Value4</ytext>
</root>
```

You can use a Python script to extract the values of "step" and "ytext" consecutively into a table. Here's a script:

```python
import os
import xml.etree.ElementTree as ET

def process_xml_file(file_path):
    try:
        tree = ET.parse(file_path)
        root = tree.getroot()

        steps = root.findall('.//step')
        ytexts = root.findall('.//ytext')

        if steps and ytexts:
            print(f"File: {file_path}")
            print("{:<15} {:<15} {:<15}".format("Step", "YText", "File"))
            print("-" * 50)

            for step, ytext in zip(steps, ytexts):
                print("{:<15} {:<15} {:<15}".format(step.text, ytext.text, file_path))

            print("\n")

    except ET.ParseError as e:
        print(f"Error parsing {file_path}: {e}")

def process_directory(directory_path):
    for filename in os.listdir(directory_path):
        if filename.endswith(".xml"):
            file_path = os.path.join(directory_path, filename)
            process_xml_file(file_path)

# Replace 'xml_files' with the actual directory path
process_directory('xml_files')
```

This script assumes that the "step" and "ytext" tags occur in pairs in the XML files. It then prints a table with columns "Step," "YText," and "File," indicating the values of "step" and "ytext" along with the corresponding file names. Adjust the directory path in the script according to your actual XML files.
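If you'd rather collect the pairs into a data structure instead of printing them, the same zip-based pairing can be sketched like this (the inline sample XML is an assumption matching the structure above, used so the snippet runs without files on disk):

```python
import xml.etree.ElementTree as ET

# Sample XML matching the structure described above (illustrative assumption)
sample = """
<root>
<step>Step1</step>
<ytext>Value1</ytext>
<step>Step2</step>
<ytext>Value2</ytext>
</root>
"""

def extract_pairs(xml_text):
    """Return (step, ytext) tuples, pairing elements in document order."""
    root = ET.fromstring(xml_text)
    steps = [e.text for e in root.findall('.//step')]
    ytexts = [e.text for e in root.findall('.//ytext')]
    return list(zip(steps, ytexts))

rows = extract_pairs(sample)
print(rows)  # [('Step1', 'Value1'), ('Step2', 'Value2')]
```

The returned tuples can then be written to CSV or loaded into a table as needed.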

AI Links

ChatSonic AI - https://writesonic.com/chat
Durable AI - https://durable.co/
Eightify AI - https://eightify.app/
Perplexity AI - https://www.perplexity.ai/
Runway ML AI - https://runwayml.com/
Amazon Code Whisperer AI - https://aws.amazon.com/codewhisperer/
Caktus AI - https://www.caktus.ai/

netstat network IP

tracert -d debug....net
nslookup authoring...com

py python list lists add create

# Create an empty list
a = list()
a = []
# Create a list with data
b = ['a', 'b', 'c', 'd']

# Add
a.append('f')      # append to the end
a.insert(0, 'e')   # insert at index 0
b.extend(a)        # append all items of a to b

# Remove
b.pop()        # removes the last element
b.pop(3)       # removes the item at index 3, i.e. 'd'
b.remove('e')  # removes the first occurrence of the given value
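Running the operations above in order leaves the lists in a predictable state; a quick sketch to confirm:

```python
a = []
a.append('f')
a.insert(0, 'e')           # a is now ['e', 'f']

b = ['a', 'b', 'c', 'd']
b.extend(a)                # b is now ['a', 'b', 'c', 'd', 'e', 'f']

b.pop()                    # removes 'f'
b.pop(3)                   # removes 'd'
b.remove('e')              # removes 'e'
print(b)                   # ['a', 'b', 'c']
```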

git commands

git clone <url> : take a repository stored on a server (like GitHub) and download it
git add <filename(s)> : add files to the staging area to be included in the next commit
git commit -m "message" : take a snapshot of the repository and save it with a message about the changes
git commit -am "message" : stage all changes to tracked files and commit them in one step (note: -a does not take filenames)
git status : print what is currently going on with the repository
git push : push any local changes (commits) to a remote server
git pull : pull any remote changes from a remote server to a local computer
git log : print a history of all the commits that have been made
git reflog : print a list of all the different references to commits

git reset --hard <commit> : reset the repository to a given commit
git reset --hard origin/master : reset the repository to its original state (e.g. the version cloned from GitHub)

git branch : list all the branches currently in a repository
git branch <name> : create a new branch called name
git checkout <name> : switch current working branch to name
git push --set-upstream origin branch2 : push a new local branch (here branch2) to the remote and set it as the tracking branch
git merge <name> : merge branch name into current working branch (normally master)
git fetch : download all of the latest commits from a remote to a local device
git merge origin/master : merge origin/master, which is the remote version of a repository normally downloaded with git fetch, into the local, pre-existing master branch
Note that git pull is equivalent to running git fetch and then git merge origin/master

py request api

from bs4 import BeautifulSoup
import requests

urls = ['https://example.com']  # replace with your list of URLs

for url in urls:
    r = requests.get(url)
    if r.status_code != requests.codes.ok:
        print("FAILURE FOR", url, "STATUS", str(r.status_code))
        continue

    html = r.text
    soup = BeautifulSoup(html, 'lxml')
    all_links = soup.find_all("a")

    for link in all_links:
        href = link.get("href")
        if href is not None and href.startswith('/') and len(href) > 1:
            linktxt = link.get_text().strip()
            print(linktxt, href)
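When BeautifulSoup or lxml isn't installed, the link-extraction step alone can be sketched with the standard library's html.parser (the sample HTML string here is an assumption for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags, mirroring the soup.find_all('a') step."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            # Same filter as above: keep site-relative links only
            if href is not None and href.startswith('/') and len(href) > 1:
                self.links.append(href)

# Hypothetical sample HTML standing in for a fetched page
sample_html = '<a href="/docs">Docs</a> <a href="https://x.test">Ext</a> <a href="/">Root</a>'
parser = LinkExtractor()
parser.feed(sample_html)
print(parser.links)  # ['/docs']
```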

postgres db

pg_dump -f portfoliodb.dump portfoliodb -Fc

pg_restore -h rdspgdb.cnhtgoim0dl9.us-east-2.rds.amazonaws.com -p 5432 -U danny82 -d portfoliodb portfoliodb.dump

pg_restore -h 18.191.180.153 -p 5432 -U postgres -d portfoliodb portfoliodb.dump

create a model

1) Add the model details into the models.py file
2) Then, to get them into the database, run:

python manage.py makemigrations

python manage.py migrate

3) If you want to use these through Admin then you need to register that in admin.py

from .models import Job
admin.site.register(Job)
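As a minimal sketch, a hypothetical Job model in models.py might look like this (the field names are illustrative assumptions, not from the original notes):

```python
# models.py -- hypothetical example model; field names are assumptions
from django.db import models

class Job(models.Model):
    title = models.CharField(max_length=200)
    summary = models.TextField(blank=True)

    def __str__(self):
        return self.title
```

After adding it, run makemigrations and migrate as above, then register it in admin.py.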

create template view html

10) Create view to refer to this template

from django.shortcuts import render

def home(request):
    return render(request, 'acc/home.html')

9) In the urls.py file, do the following to add a URL

from django.urls import path
import productapp.views as productviews

urlpatterns = [
    path('', productviews.home, name='home'),
]

OR

from django.urls import path, include

urlpatterns = [
    path('acc/', include('accountsapp.urls')),
]

For the latter, ensure that you have a urls.py file present in the app's directory, like below:

from django.urls import path
from .views import *

urlpatterns = [
    path('', home, name='acchome'),
]

create new app

6) Move to creating an App
python manage.py startapp productapp

7) Update Settings to have the apps available

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'accountsapp.apps.AccountsappConfig',
    'productapp.apps.ProductappConfig',
]