The Intelligent Learning Space

Photo by Philippe Bout on Unsplash

So what do we mean by a learning space and how is an intelligent learning space different?

Though the main thrust of the Jisc Intelligent Campus project is looking at how we can extend learning analytics to include physical data, there is also space to discuss peripheral and related issues. One aspect of this is the design and development of learning spaces, as well as the use of data gathered from those spaces.

Most learning spaces are static, designed to allow for particular kinds of learning. Some have an element of flexibility, allowing for different kinds of learning activity.

Often the pedagogy is shoe-horned into the space that is available; even if more appropriate spaces exist on campus, they are frequently unavailable for that particular slot or cohort.

Photo by Nathan Dumlao on Unsplash

A smart learning space would take into account the historical usage of the room and how people felt the space either contributed to or hindered the learning taking place there. You can imagine how users of the room could add to a dataset about the activities taking place in the room and how well they felt those activities went.

You would think that data from the timetable could allow for this automatically, but timetabling data only tells us about the cohort, the course they are on and the academic leading the session; most timetabling software doesn’t hold granular data about the learning activities themselves.

The course module information may contain the planned activities, but it may not include the room data from the timetable, nor the cohort details. You could easily imagine that some cohorts would be quite happy undertaking group activities in a lecture theatre, while other cohorts would work more effectively if the space were better at facilitating the proposed learning activity.

Likewise, when it comes to adding feedback about the session, where does that live? Which dataset contains that data?

Then there are environmental conditions such as temperature, humidity and CO2 levels, which can also affect the learning process.

So an actual smart learning space would be able to access data about the session from multiple sources and build a picture of which kinds of learning spaces work best for which kinds of learning activities, taking into account factors such as the cohort, the environmental conditions, the academic leading the session and so on.
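As a rough illustration of what that picture-building might look like, here is a minimal sketch in Python. The dataset shapes and field names (session feedback ratings, averaged CO2 readings, timetable records) are assumptions for illustration only, not an existing Jisc schema.

```python
from collections import defaultdict
from statistics import mean

# Illustrative records pulled from separate systems: the timetable,
# post-session feedback and environmental sensors. All field names are
# hypothetical.
timetable = [
    {"session_id": "s1", "room": "LT1", "cohort": "BSc-Y1", "activity": "group work"},
    {"session_id": "s2", "room": "SEM3", "cohort": "BSc-Y1", "activity": "group work"},
]
feedback = [
    {"session_id": "s1", "space_rating": 2},  # 1-5: did the space help the learning?
    {"session_id": "s2", "space_rating": 5},
]
sensors = [
    {"session_id": "s1", "avg_co2_ppm": 1450},
    {"session_id": "s2", "avg_co2_ppm": 750},
]

def build_space_picture(timetable, feedback, sensors):
    """Average the space ratings per (room, activity) pair, keeping the
    CO2 readings alongside so a poor rating can be explained."""
    ratings = {f["session_id"]: f["space_rating"] for f in feedback}
    co2 = {s["session_id"]: s["avg_co2_ppm"] for s in sensors}
    per_space = defaultdict(list)
    for session in timetable:
        sid = session["session_id"]
        if sid in ratings:
            per_space[(session["room"], session["activity"])].append(
                {"rating": ratings[sid], "co2_ppm": co2.get(sid)}
            )
    return {
        key: {"avg_rating": mean(r["rating"] for r in rows), "sessions": rows}
        for key, rows in per_space.items()
    }

print(build_space_picture(timetable, feedback, sensors))
```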

These datasets could also be used to inform future space planning and new builds, but smart learning spaces are only the beginning. Taking a smart space and making it intelligent is an obvious next step.

An intelligent learning space would take this data and start to make suggestions based on it. It would identify possible issues with the learning plan and recommend either changing the planned learning activities or moving to a more appropriate space. An intelligent learning space would also adjust the environmental conditions to suit the activities planned for that space, rather than users having to adjust the conditions manually when it becomes too cold, too hot, too bright or too stuffy.
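A minimal sketch of that kind of adjustment, assuming a simple rule-based mapping from planned activity to target conditions and a hypothetical building-management `controller` interface (neither is an existing API):

```python
# Hypothetical target conditions per activity type: temperature (°C),
# maximum CO2 (ppm) before ventilation steps up, and lighting level (%).
TARGETS = {
    "lecture":    (21.0, 1000, 70),
    "group work": (20.0, 800, 90),
    "exam":       (21.5, 900, 100),
}

def prepare_room(room_id, activity, controller):
    """Push target conditions to the room ahead of the planned session,
    so users don't have to adjust them once it is too hot or too stuffy."""
    temperature, co2_limit, lighting = TARGETS.get(activity, TARGETS["lecture"])
    controller.set_point(room_id, "temperature", temperature)
    controller.set_point(room_id, "ventilation_co2_limit", co2_limit)
    controller.set_point(room_id, "lighting", lighting)
```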


Making the timetabling software intelligent, or at least dynamic, could mean that rooms are no longer allocated to cohorts of students for a set amount of time, but are instead allocated as and when needed, based on pedagogical and student need.
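A sketch of what allocation by need rather than by slot might look like; the room features, session needs and booking structure here are all illustrative:

```python
rooms = [
    {"id": "LT1", "capacity": 120, "features": {"lecture capture", "fixed seating"}},
    {"id": "SEM3", "capacity": 30, "features": {"movable furniture", "whiteboards"}},
]

def allocate(session, rooms, bookings):
    """Return the first free room that fits the cohort size and offers the
    features the planned activity needs, booking it for that start time."""
    for room in rooms:
        free = (room["id"], session["start"]) not in bookings
        fits = room["capacity"] >= session["cohort_size"]
        suits = session["needs"] <= room["features"]  # subset test
        if free and fits and suits:
            bookings.add((room["id"], session["start"]))
            return room["id"]
    return None  # no suitable space: flag for a change of plan or of space

bookings = set()
print(allocate({"cohort_size": 25, "needs": {"whiteboards"},
                "start": "2019-01-17T10:00"}, rooms, bookings))  # SEM3
```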

One of the key issues with all this is collecting and storing the data somewhere: a centralised hub would be critical, and that is something Jisc has already built for the learning analytics service and would use for the future Intelligent Campus service.

Intelligent Campus Community Event – City, University of London – 17th January 2019


If you are working in the Intelligent Campus area and are interested in the work being undertaken in this space by others, we would like to invite you to attend one of our community events.

The community of practice gives people an opportunity to network, share practice, and hear what various institutions and Jisc are doing in this space, covering topics such as:

  • Smart City
  • Smart Campus
  • Wayfinding
  • Wi-Fi Heat Mapping
  • Mapping
  • Space Utilisation
  • Smart Buildings
  • RFID tracking
  • Wi-Fi tracking
  • Facial recognition
  • Chatbots
  • Robots
  • Artificial Intelligence
  • Learning Spaces

The third of these events is being hosted at City, University of London on 17th January 2019, from 10:00 to 16:00; lunch will be provided.

Please put this date in your diary. You can book onto the event using this link:

https://www.eventsforce.net/jiscevents/434/register

You will have the opportunity to discover more about the Jisc project that is being undertaken in the Intelligent Campus space as well as hear from others about their work in this exciting topic. There will be plenty of opportunities for discussion and networking.

Join the project mailing list

Photo by Rafaela Biazi on Unsplash

As the project moves through its various phases, we will use the blog to update members and the community on progress. We are also using the blog to post drafts of documents for comment and review.

We have also created a mailing list for people who are interested in the work we are undertaking, who want to find out more about the project, and who would potentially like to get involved in its different phases.

The mailing list can also be the place to discuss issues related to the Intelligent Campus space, such as library spaces, learning spaces, the Internet of Things, wayfinding, Wi-Fi tracking and heat mapping.

We will also use the mailing list to tell people about forthcoming community events, other Jisc events such as Digifest, and other relevant events and workshops.

You can sign up to the mailing list using this link.

Le campus intelligent et l’expérience étudiante

48ème ADBU Congrès

It was with a little trepidation that I stood on the stage at the 48ème ADBU Congrès to deliver a keynote on the intelligent campus and the student experience. The audience was made up of French library professionals attending the congress.

I delivered my presentation in English, and a translation service was available for those who needed it. The presentation covered the background to the Intelligent Campus project and how it builds on the existing Jisc learning analytics service. I briefly covered that service and what it enables for the universities and colleges using it. I also spoke about how the service can provide data and visualisations to students to help them improve their own performance.

I described the plan for the technical infrastructure behind the intelligent campus and how the data hub can be used to deliver data to different presentation layers. These presentation layers covered a range of possibilities.

48ème ADBU Congrès

Talking about tracking students and gathering other data about them brings the legal and ethical issues to the fore. It is important to think about these issues before moving ahead with analytics. We also considered the technical challenges: can we actually measure some of the things that would provide a useful insight? Are these insights even valid? It was this last point that was picked up in subsequent discussions and presentations at the congress. Do certain kinds of activities actually help students to achieve and succeed? More research in this space is needed.

Many of the questions at the end of the presentation were similar to questions we’ve had at events in the UK.

Overall my keynote provided an insight into the work Jisc is undertaking in the Intelligent Campus space and how far we have come in the realm of learning analytics.

Location-Aware Applications

This is a guest blog post by Andrew Cormack, chief regulatory adviser, Jisc Technologies, looking at some of the issues that arise when using location-aware applications.


Wouldn’t it be great if, when passing the library, your mobile phone reminded you of the books you meant to borrow? Wouldn’t it be scary if your tutor knew everywhere you had been in the past week? Your phone’s ability to determine its own location – whether by GPS or by knowing which access points are within range – creates opportunities for highly beneficial applications, as well as highly intrusive ones. When designing, implementing and choosing location-aware applications several indicators can warn you which of those you may be looking at…

Opt-in vs Invisible?

The first distinction is between an application that the individual user enables and one that notes the location of any device within range. Both require clear and accurate descriptions of all the information they access and what it is used for. Clearly it is much easier to provide that as part of an active download/enable process than when an individual simply wanders into a monitored space – just one reason why both law and our instincts regard the former as much more acceptable than the latter.

On-device vs On-server

Another significant difference is where the location information is processed. Applications that run within the device (e.g. the “you’re near the library” example above) are likely to cause fewer concerns than ones that require location to be reported to a central service. Even on-device applications still need to be careful to minimise processing of location data; but central services that know the locations of many devices/people are likely to be expected to provide more safeguards and explanation.
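As a sketch of the on-device pattern, assuming hypothetical coordinates and a local `notify` callback: the phone compares its own fix against points of interest locally, and nothing is reported to a central service.

```python
import math

LIBRARY = (51.5246, -0.1340)  # hypothetical lat/lon of the library entrance
REMINDER = "You're near the library - you meant to borrow two books."

def distance_m(a, b):
    """Approximate distance in metres between two (lat, lon) points."""
    lat_scale = 111_320
    lon_scale = 111_320 * math.cos(math.radians(a[0]))
    return math.hypot((a[0] - b[0]) * lat_scale, (a[1] - b[1]) * lon_scale)

def on_location_update(current_fix, notify):
    """Runs on the device: the fix is compared locally and then discarded."""
    if distance_m(current_fix, LIBRARY) < 50:
        notify(REMINDER)

# Example: a fix a few metres from the entrance triggers the local reminder.
on_location_update((51.5247, -0.1341), notify=print)
```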

Point vs Track

Applications that involve recording a sequence of locations are likely to be perceived as more intrusive than those that simply record presence. Indeed, European legislators are currently debating whether tracking applications that are not opt-in should be banned. However, there are many applications that only need to process a single, current location (again, see the library example) or, indeed, merely the number of devices present in an area (for example, to identify where additional Wi-Fi coverage might be beneficial). Since the same technology is used for all these options, applications should include, and describe, safeguards to ensure the broader functionality is not, in fact, used. If you are using technology to count the number of people in a location, make sure you describe what prevents the same sensor being used to listen to, watch or track them.
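A sketch of that counting-only safeguard: identifiers seen by a sensor are hashed with a salt that is discarded at the end of each interval, so the same data cannot later be linked into a track. The probe-record field name is an assumption for illustration.

```python
import hashlib
import secrets

def count_devices(sightings):
    """Count distinct devices seen in one interval and report only that.

    The salt is generated here and never stored, so the hashed identifiers
    cannot be matched against another interval's - the sensor counts,
    it does not track.
    """
    salt = secrets.token_bytes(16)
    seen = {
        hashlib.sha256(salt + s["device_id"].encode()).hexdigest()
        for s in sightings
    }
    return len(seen)

sightings = [{"device_id": "aa:bb:cc:01"},
             {"device_id": "aa:bb:cc:01"},   # same device seen twice
             {"device_id": "aa:bb:cc:02"}]
print(count_devices(sightings))  # 2
```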

Update on the Intelligent Campus hackathon

Following the hackathon, we have published a blog post (on the main Jisc website) on the completed hackathon.


Students working on hackathon pitch by Paul Bailey CC BY-NC-ND

Student ideas become a reality following two app development challenges: “We only asked for prototypes, but two student teams in our development competition gave us app store-ready solutions to tackle common campus annoyances.”

Libraries at the heart of the institution

As part of a wider Jisc consultation on libraries with key stakeholders, I was invited to present some background to the Intelligent Campus:

Digital horizon: The intelligent campus

During this session James will provide you with an overview of Jisc’s intelligent campus project, our direction of travel and why.

This was based on previous presentations I have given in this space.

It did raise the notion of the intelligent library and the potential of data and analytics to enhance the library user experience.

As one of the delegates remarked on Twitter, this can be frightening as much as it can be fascinating.

Hey Siri, what’s my day like today? Alexa what’s my next lesson? Okay Google, are my library books available?


Voice assistants are becoming not just more widespread, but also much more useful.

Alexa was announced by Amazon in November 2014 alongside the Echo devices, which act as connected hubs for voice-controlled services, complete with speakers and, in some cases, small screens.

Photo by Rahul Chakraborty on Unsplash

Cortana from Microsoft was demonstrated in April 2014 and released as part of Windows 10 in 2015. In May 2017, Microsoft, in collaboration with Harman Kardon, announced INVOKE, a voice-activated speaker featuring Cortana.

Bixby from Samsung was announced in March 2017. Unlike the other voice assistants, Bixby is being built into a range of consumer goods that Samsung manufactures, such as refrigerators and TVs.

Google has Google Home, which was announced in May 2016 and released in the UK the following year. Google Home speakers enable users to speak voice commands to interact with services through Google’s intelligent personal assistant, Google Assistant.

Photo by Charles Deluvio 🇵🇭🇨🇦 on Unsplash

And of course there is Siri from Apple. Siri was originally released as a stand-alone application for the iOS operating system in February 2010; after Apple bought the company, it was released as part of the operating system in October 2011. It wasn’t until February 2018 that Apple released its own connected speaker hub, the HomePod.

Many of these voice assistants started their journey on mobile devices, but over the last few years we have seen connected voice controlled hubs appearing on the market.

An online poll in May 2017 found the most widely used in the US were Apple’s Siri (34%), Google Assistant (19%), Amazon Alexa (6%), and Microsoft Cortana (4%).

Though we might want to see how we can embed these assistants into the classroom, they are not aimed at the education market; they are consumer devices aimed at individuals. Our students are certainly the type of consumer who may purchase these devices, and they will want to be able to connect them to the university or college services they use.


All the voice assistants require some kind of link to information and in some cases data.

If I ask Alexa to play a particular song, she delves not just into my personal music collection on the Amazon Music app but also into what is available through my Prime subscription. If the song isn’t available, I could either subscribe to the Amazon Music streaming service or purchase the song. The Alexa ecosystem is built around my Amazon account and the services available to me as a Prime subscriber.

With Google Home, I have connected my free Spotify account. This is one of the key features of these devices: you can connect services you already subscribe to, so you can control them by voice. Of course, the reason I have a free Spotify account is that Google Home would much prefer I was connected to Google Music, and it certainly won’t let me connect to either my home iTunes library (where virtually all my music is) or to Amazon Music. So when I ask Google Home to play a particular music track, she gets annoyed and says that she can’t, as that track is only available on Spotify Premium.

This is one of the challenges of these devices: they are quite reliant on subscriptions to other services. Apple’s HomePod only really works if you have an Apple Music subscription.

When it comes to connecting services to voice assistants, there are two key challenges: can you get the right data out to the right people, and can you do this across the range of voice assistants available, especially when you remember that there is no de facto standard for them?

It would be useful to know and understand what sorts of questions would be asked of these assistants. There are the obvious ones, such as: where is my next lesson? What books would be useful for this topic? When is my tutor free for a quick chat about my assignment? Do I need to come into college today?

Even simple questions could result in a complicated route through multiple online systems. Imagine asking: where and when is my next lecture, what resources are available, and are there any relevant books in the library on this subject? The module design or course information system (or, more likely, a dumb document) would hold the information on what comes next. The timetabling system would be able to tell the learner where and when the lesson is. Add the extra layer of last-minute changes because of staff sickness, or building work resulting in a room change. The resources for the session may be on the VLE or another platform, while additional resources could sit in the library systems. How would the voice assistant know what to do with all this information – could it push the links to a mobile device? Add in a social platform, say a closed Facebook group, or a collaborative tool such as Slack, and you start to see how a simple question about what I am doing next and where becomes rather complicated.
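To make that plumbing concrete, here is a rough sketch of the orchestration behind that single question. Every system interface here (timetable_api, vle_api, library_api, push and their methods) is a hypothetical placeholder; each real system would need its own integration.

```python
def next_lecture_answer(student_id, timetable_api, vle_api, library_api, push):
    """Answer 'where and when is my next lecture, and what goes with it?'
    by stitching together several campus systems."""
    session = timetable_api.next_session(student_id)   # includes room changes
    if session is None:
        return "You have nothing timetabled next."

    resources = vle_api.resources_for(session["module_id"])      # slides, readings
    books = library_api.search(subject=session["module_title"])  # related titles

    answer = (f"Your next session is {session['module_title']} at "
              f"{session['start_time']} in {session['room']}.")
    if resources or books:
        # Long lists don't suit a spoken reply, so push them to the phone.
        push(student_id, {"resources": resources, "library_books": books})
        answer += " I've sent the resources and related library books to your phone."
    return answer
```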

There is something to be said, though, for ensuring services work with voice assistants, as the same data and information could also be used with chatbot interfaces (i.e. textual assistants) and with campus-bound services such as kiosks or web portals. Get the data right and it’s simply a matter of ensuring the interface – voice, text or screen – works. Learning analytics services, such as the one we are developing at Jisc, rely on a hub where academic and engagement data is collected, stored and processed. Could we use a similar data structure to build the back-end system for chatbots, kiosks and voice assistants?
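A sketch of that “get the data right once” idea, assuming a hypothetical hub query interface rather than the actual Jisc data hub API: the answer is built once and then rendered for voice, chatbot or kiosk.

```python
def build_answer(hub, student_id):
    """One query against the shared data hub; the result is interface-neutral."""
    session = hub.query("next_session", student_id=student_id)
    return {"module": session["module_title"],
            "time": session["start_time"],
            "room": session["room"]}

def render_voice(answer):
    return (f"Your next class is {answer['module']} at {answer['time']} "
            f"in {answer['room']}.")

def render_chatbot(answer):
    return f"Next: {answer['module']}\nWhen: {answer['time']}\nWhere: {answer['room']}"

def render_kiosk(answer):
    return {"title": answer["module"],
            "subtitle": f"{answer['time']} | {answer['room']}"}
```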

Could we Siri? Could we?

Looking forward to working with our Intelligent Campus hackathon winning teams next week

See this post from Paul Bailey to find out who got through selection and what they are going to be doing next week.

We’ll keep you updated after the hackathon completes, but I think it’s going to be an interesting week at Aston.


The Challenge of the Intelligent Library


There has been plenty of hype over artificial intelligence and the internet of things. Is it time to put aside the cynicism that this kind of hype generates and look seriously at how we can take advantage of these emerging technologies to improve the student experience and build an intelligent library?

The internet of things makes it possible for us to gather real-time data about the environment and usage of our library spaces.  It is easy to imagine using this data to ensure the library is managed effectively, but could we go further and monitor environmental conditions in the library, or even, using facial recognition software, student reactions as they use the library so that we can continually refine the learning experience?

The background to the Jisc Intelligent Campus project was the basis of my recent keynote at the CILIP UKeiG meeting in London on 26th June 2018.

I first discussed the concept of the Intelligent Campus, describing what we at Jisc understand by the term and how it differs from the ideas of a smart campus or smart buildings. I then talked about the Intelligent Library.


I also covered the issues in this area, which include not just GDPR and data protection but also the huge ethical issues that arise when tracking library users, not only in what they use and borrow but also in their physical movements around the library. I also brought up the technical and validity challenges of using data and analytics, and the importance of understanding the narratives behind any data story.

There were some great questions from the audience and a lot of interest in this topic, and the conversation continued over lunch.