For more information about Python logging, see: https://docs.python.org/3/howto/logging.html
We'll start by adding some config options for logging in our config file:
[common]
consumer_key = **your key here**
consumer_secret = **your key here**
access_token = **your key here**
access_token_secret = **your key here**
# accepted logging levels are: DEBUG, INFO, WARNING, ERROR, CRITICAL
logging_level = DEBUG
logging_path = Logs
[env-qa]
twitter_api_base_url = https://api.twitter.com
- Logging levels are defined in the Python logging module; see the link above.
- For now the logging path is relative to the current directory. You could of course change that in your implementation.
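As a quick aside, the logging module accepts level names as strings, so the value read from the config file can be passed straight to setLevel without any translation. A minimal sketch (the "DEBUG" literal stands in for the value read from the [common] section):

```python
import logging

# Stand-in for the logging_level value read from the config file
logging_level = "DEBUG"

demo_logger = logging.getLogger("config-demo")
demo_logger.setLevel(logging_level)  # level names as strings are accepted

# The string maps to the module's numeric level constant
assert demo_logger.level == logging.DEBUG
```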
Let's make the necessary changes to Config.py:
import configparser
import os
import sys


class Config:
    LOGGER_ID = "API SAMPLE TESTS"
    CONFIG_DIR = 'ConfigFiles'
    COMMON_CONFIG_SECTION = 'common'
    ENV_CONFIG_SECTION = 'env-qa'
    LOG_FILE_NAME = "testlog.log"

    config_file_name = None
    consumer_key = None
    consumer_secret = None
    access_token = None
    access_token_secret = None
    twitter_api_base_url = None
    logging_level = None
    logging_path = None
    logger = None

    # Warning! The code to set up logging requires this config class. Do not do any logging in __init__
    def __init__(self, config_file_name):
        try_path = os.path.join(self.current_dir, config_file_name)
        if os.path.exists(try_path):
            self.config_file_name = try_path
        else:
            self.config_file_name = os.path.join(self.current_dir, self.CONFIG_DIR, config_file_name)
            if not os.path.exists(self.config_file_name):
                print("\n\nFatal Error: Config file: {} not found.".format(self.config_file_name))
                sys.exit()

    # Warning! The code to set up logging requires this config class. Do not do any logging in read_config
    def read_config(self):
        config = configparser.ConfigParser()
        config.read(self.config_file_name)
        self.consumer_key = config[self.COMMON_CONFIG_SECTION]['consumer_key']
        self.consumer_secret = config[self.COMMON_CONFIG_SECTION]['consumer_secret']
        self.access_token = config[self.COMMON_CONFIG_SECTION]['access_token']
        self.access_token_secret = config[self.COMMON_CONFIG_SECTION]['access_token_secret']
        self.logging_level = config[self.COMMON_CONFIG_SECTION]['logging_level']
        self.logging_path = config[self.COMMON_CONFIG_SECTION]['logging_path']
        self.twitter_api_base_url = config[self.ENV_CONFIG_SECTION]['twitter_api_base_url']
        print("Twitter api base url = {}".format(self.twitter_api_base_url))

    @property
    def current_dir(self):
        return os.path.dirname(os.path.abspath(__file__))
You will notice I have a couple of warnings in here about dependencies. I'm not a big fan of creating circular dependencies that could lead to an endless loop, but in this case it's either risk that or our logging just won't be set up, so I think full disclosure is best. I have in the past had a "log_config" method to print out config settings. This can be handy in determining what happened.
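The "log_config" idea mentioned above could look something like this. This is a hypothetical sketch, not code from the repo, and note that it must only be called after logging has been set up; it returns the lines it logged so the behavior is easy to verify:

```python
import logging


class ConfigDemo:
    # Hypothetical subset of the real Config fields
    logging_level = "DEBUG"
    twitter_api_base_url = "https://api.twitter.com"

    def log_config(self):
        # Only safe to call once handlers exist; logs each setting and
        # returns the formatted lines for easy verification.
        logger = logging.getLogger("API SAMPLE TESTS")
        lines = []
        for name in ("logging_level", "twitter_api_base_url"):
            line = "{} = {}".format(name, getattr(self, name))
            logger.info(line)
            lines.append(line)
        return lines


assert ConfigDemo().log_config() == [
    "logging_level = DEBUG",
    "twitter_api_base_url = https://api.twitter.com",
]
```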
Now it's time to set up our logging. I have found that the conftest.py file has a method that will run before anything else, so we'll put our code to initialize logging there. A bonus is that we'll change our instance of config to use the one in this file. It's a bit of Python trickery, but anything in this file will exist for the duration of the test run.
import logging
from logging import handlers
from os import path

from Config import Config

MAX_LOG_SIZE = 999999

_config = None
_config_file_name = None


def pytest_addoption(parser):
    parser.addoption("--config", action="store", default='local',
                     help="specify the config file to use for tests")


def pytest_configure(config):
    get_config(config)
    setup_logging()


def setup_logging():
    log_file_path = path.join(_config.current_dir, _config.logging_path, _config.LOG_FILE_NAME)
    # create logger with logger id in Config
    logger = logging.getLogger(Config.LOGGER_ID)
    logger.setLevel(_config.logging_level)
    # create file handler which logs even debug messages
    fh = logging.handlers.RotatingFileHandler(log_file_path, maxBytes=MAX_LOG_SIZE, backupCount=5)
    fh.setLevel(_config.logging_level)
    fh.doRollover()
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(_config.logging_level)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)
    print()
    logger.info("TheTestFramework Tests starting...")


def get_config(pytest_config):
    global _config, _config_file_name
    if _config is None:
        _config_file_name = pytest_config.getoption('config')
        print("\nconfig file is {}".format(_config_file_name))
        _config = Config(_config_file_name)
        _config.read_config()
    return _config
So you see we set up two logging handlers, one for our log file and one for stdout. Because of this we also need to read our config file when the test run starts up, so we add a method to do that.
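The maxBytes/backupCount behavior of RotatingFileHandler can be verified in isolation. This is a throwaway sketch using a temp directory, not part of the framework:

```python
import logging
import logging.handlers
import os
import tempfile

tmp_dir = tempfile.mkdtemp()
log_path = os.path.join(tmp_dir, "testlog.log")

rotate_logger = logging.getLogger("rotate-demo")
rotate_logger.setLevel(logging.DEBUG)
# Tiny maxBytes so rotation is easy to trigger in a demo
fh = logging.handlers.RotatingFileHandler(log_path, maxBytes=200, backupCount=2)
rotate_logger.addHandler(fh)

for i in range(50):
    rotate_logger.debug("message number %d", i)
fh.close()

# At most the live file plus backupCount rotated files ever exist
log_files = sorted(f for f in os.listdir(tmp_dir) if f.startswith("testlog.log"))
assert "testlog.log" in log_files
assert len(log_files) <= 3
```

This is why the backupCount=5 in conftest.py caps disk usage no matter how long the test suite runs.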
The pytest_configure method name is a special one to pytest. Anything in that method will get called before the tests run.
Adding logging also lets us log the output of our API calls. The best place to do that is in WebServiceBase. I only log web service call output if our logging level is set to DEBUG, which is what I use whenever I run the tests on my local machine. When they run from the CI system the logging level is set to "INFO", so our log files stay a manageable size.
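One common way to keep debug-only output cheap at higher levels is to guard it with isEnabledFor. This is a sketch of the idea, not the exact code from WebServiceBase:

```python
import logging

guard_logger = logging.getLogger("guard-demo")
guard_logger.setLevel(logging.INFO)  # the level a CI run would use


def log_response_body(body):
    # Only emit (and format) the body when DEBUG is actually enabled,
    # so INFO-level CI runs keep the log files small.
    if guard_logger.isEnabledFor(logging.DEBUG):
        guard_logger.debug("Response json: %s", body)
        return True
    return False


assert log_response_body('{"status": "ok"}') is False  # skipped at INFO
guard_logger.setLevel(logging.DEBUG)                   # local-machine setting
assert log_response_body('{"status": "ok"}') is True
```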
import json
import logging
from enum import Enum

import httplib2
import oauth2

from WebServicesClients.WebServiceResponse import WebServiceResponse


class HttpMethod(Enum):
    GET = 1
    PUT = 2
    POST = 3
    DELETE = 4


class WebServiceBase:
    consumer_key = None
    consumer_secret = None
    access_token = None
    access_token_secret = None
    logger = None

    def __init__(self, base_url, consumer_key, consumer_secret, access_token, access_token_secret, logger_id):
        self.consumer_key = consumer_key
        self.consumer_secret = consumer_secret
        self.access_token = access_token
        self.access_token_secret = access_token_secret
        self.logger = logging.getLogger(logger_id)
        # let's do some funky python stuff to add the base url to all our urls
        url_vars = [attr for attr in dir(self)
                    if attr.endswith("_URL") and not callable(getattr(self, attr))]
        for var in url_vars:
            self.__dict__[var] = base_url + getattr(self, var)

    def _get(self, url, headers=None):
        self.logger.debug("GET: {}".format(url))
        return self._service_call_impl(url, HttpMethod.GET, "", headers)

    def _put(self, url, payload, headers=None):
        self.logger.debug("PUT: {}".format(url))
        return self._service_call_impl(url, HttpMethod.PUT, payload, headers)

    def _post(self, url, payload, headers=None):
        self.logger.debug("POST: {}".format(url))
        return self._service_call_impl(url, HttpMethod.POST, payload, headers)

    def _delete(self, url, headers=None):
        self.logger.debug("DELETE: {}".format(url))
        return self._service_call_impl(url, HttpMethod.DELETE, "", headers)

    def _service_call_impl(self, url, http_method, payload, headers=None):
        try:
            if payload:
                self.logger.debug("Payload: {}".format(payload))
            consumer = oauth2.Consumer(key=self.consumer_key, secret=self.consumer_secret)
            token = oauth2.Token(key=self.access_token, secret=self.access_token_secret)
            client = oauth2.Client(consumer, token)
            resp, content = client.request(url, method=http_method.name, body=payload, headers=headers)
        except httplib2.HttpLib2Error as e:
            service_response = WebServiceResponse(0, "", str(e))
            return service_response
        service_response = WebServiceResponse(resp['status'], self.loads_json(content), content)
        self.logger.debug("Response Code: {}".format(service_response.response_code))
        self.logger.debug("Response json: {}".format(content))
        return service_response

    def loads_json(self, myjson):
        try:
            return json.loads(myjson.decode("utf-8"))
        except ValueError:
            return None
You will notice in our constructor an extra parameter for logger_id. This is how WebServiceBase gets the logger ID. I suppose I could have WebServiceBase take a config object, but I'm trying to keep anything that uses config at the test-file level. I'm not including the contents of TwitterStatusesService.py, but it too needs to be modified to pass in the logger_id. You will also notice that our constructor is getting pretty long. We'll take care of that later. What we need is a data object for our auth tokens.
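The base-URL prefixing trick in the constructor can be illustrated in isolation. The class and endpoint paths below are hypothetical, but the mechanism is the same: any class attribute ending in _URL gets the base URL prepended on the instance, while the class attribute itself stays untouched:

```python
class DemoService:
    # Hypothetical endpoint paths for illustration
    TIMELINE_URL = "/1.1/statuses/home_timeline.json"
    UPDATE_URL = "/1.1/statuses/update.json"

    def __init__(self, base_url):
        url_vars = [attr for attr in dir(self)
                    if attr.endswith("_URL") and not callable(getattr(self, attr))]
        for var in url_vars:
            # Rebind on the instance, shadowing the class attribute
            setattr(self, var, base_url + getattr(self, var))


svc = DemoService("https://api.twitter.com")
assert svc.TIMELINE_URL == "https://api.twitter.com/1.1/statuses/home_timeline.json"
assert DemoService.TIMELINE_URL == "/1.1/statuses/home_timeline.json"
```

This keeps subclasses free to declare relative paths while every request goes to whichever environment the config selected.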
Last but not least, I decided to add some helper functions in their own file.
# Test helpers contains basic methods that are useful for test assertions, and data creation.
# The methods here are generally static.
import random
import string


def verify_http_response(response, expected_return_code, message, expect_empty_json=False):
    assert response.response_code == str(expected_return_code), message
    if not expect_empty_json:
        assert response.response_json is not None, "Expected response payload. Got none. :{}".format(message)


def make_random_string(num_chars):
    return ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(num_chars))
The main star in TestHelpers is verify_http_response. In this method we have asserts to verify the HTTP response code, and also to check whether the returned JSON payload is empty. This method will generally get called in every test case to verify the HTTP responses in our tests. Finally we can reflect all these changes in our test suite:
import logging
import urllib
from urllib.parse import urlencode

import pytest

import conftest
from Config import Config
from WebServicesClients.TwitterStatusesService import TwitterStatusesService
from Helpers import TestHelpers


class TestTwitterCRUD:
    config = None
    logger = None
    twitter_service = None

    def setup_class(self):
        self.config = conftest.get_config(pytest.config)
        self.logger = logging.getLogger(Config.LOGGER_ID)
        self.twitter_service = TwitterStatusesService(self.config.twitter_api_base_url,
                                                      self.config.consumer_key,
                                                      self.config.consumer_secret,
                                                      self.config.access_token,
                                                      self.config.access_token_secret,
                                                      Config.LOGGER_ID)
        print()
        self.logger.info("Running TestTwitterCRUD Tests")

    def test_get_timeline(self):
        self.logger.info("Get timeline test")
        response = self.twitter_service.get_home_timeline()
        TestHelpers.verify_http_response(response, 200, "Get Twitter Account Timeline")

    def test_post_to_timeline(self):
        self.logger.info("Post to timeline test")
        status = "Test Status {}".format(TestHelpers.make_random_string(6))
        payload = "status={}".format(urllib.parse.quote(status))
        response = self.twitter_service.post_tweet(payload)
        TestHelpers.verify_http_response(response, 200, "Post to Timeline")
        assert status in str(response.response_json)
Now you can see that our test setup is a little easier, as the functions to read the config file have been moved to conftest.py. We are also logging some informational messages about which test is running. All other logging occurs when the service calls are made. You can also see what a time saver our verify_http_response method is.
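One detail worth calling out: the payload built in test_post_to_timeline relies on urllib.parse.quote to percent-encode the status text. A quick sketch with a fixed status (the random suffix is hard-coded here, and an ampersand is added to show a reserved character being escaped):

```python
from urllib.parse import quote

# A status like the one test_post_to_timeline builds, with a fixed suffix
status = "Test Status A1B2C3 & more"
payload = "status={}".format(quote(status))

# Spaces and the ampersand are percent-encoded for the request body
assert payload == "status=Test%20Status%20A1B2C3%20%26%20more"
```

Without the encoding, the "&" would be read as a form-field separator and the status would be truncated.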
A few other observations:
- As noted, our constructor for TwitterStatusesService has quite a few parameters. So we'll want to create a container class for the credentials to reduce that.
- test_get_timeline isn't really asserting that we get any data back. In a few more posts we'll be fleshing out test cases quite a bit more, and we'll fix that problem, as well as add an appropriate number of test cases to adequately test this system.
- When you run the tests you will see that the information pytest normally sends to the console is completely missing from our log file. This is because we set up logging for the things we control, and we don't control pytest. We'll investigate ways to solve this problem.
Files for this blog post can be found at:
https://github.com/lightmanca/TheTestFrameworkBlog/tree/logging